13.3 Save Your Neural Network Model to YAML

model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, nb_epoch=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# serialize model to YAML
model_yaml = model.to_yaml()
with open("model.yaml", "w") as yaml_file:
    yaml_file.write(model_yaml)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load YAML and create model
yaml_file = open('model.yaml', 'r')
loaded_model_yaml = yaml_file.read()
yaml_file.close()
loaded_model = model_from_yaml(loaded_model_yaml)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Listing 13.5: Serialize Model To YAML Format.

Running the example displays the output below. Again, the example demonstrates the accuracy of the model, the model serialization, deserialization and re-evaluation, achieving the same results.

acc: 79.56%
Saved model to disk
Loaded model from disk
acc: 79.56%

Listing 13.6: Sample Output From Serializing Model To YAML Format.

The model described in YAML format looks like the following:

class_name: Sequential
config:
- class_name: Dense
  config:
    W_constraint: null
    W_regularizer: null
    activation: relu
    activity_regularizer: null
    b_constraint: null
    b_regularizer: null
    batch_input_shape: !!python/tuple [null, 8]
    init: uniform
    input_dim: 8
    input_dtype: float32
    name: dense_1
    output_dim: 12
    trainable: true
- class_name: Dense
  config: {W_constraint: null, W_regularizer: null, activation: relu, activity_regularizer: null,
    b_constraint: null, b_regularizer: null, init: uniform, input_dim: null, name: dense_2,
    output_dim: 8, trainable: true}
- class_name: Dense
  config: {W_constraint: null, W_regularizer: null, activation: sigmoid, activity_regularizer: null,
    b_constraint: null, b_regularizer: null, init: uniform, input_dim: null, name: dense_3,
    output_dim: 1, trainable: true}

Listing 13.7: Sample YAML Model File.

13.4 Summary

Saving and loading models is an important capability for transplanting a deep learning model from research and development to operations. In this lesson you discovered how to serialize your Keras deep learning models. You learned:
How to save model weights to HDF5 formatted files and load them again later.
How to save Keras model definitions to JSON files and load them again.
How to save Keras model definitions to YAML files and load them again.

13.4.1 Next

You now know how to serialize your deep learning models in Keras. Next you will discover the importance of checkpointing your models during long training periods and how to load those checkpointed models in order to make predictions.
Chapter 14
Keep The Best Models During Training With Checkpointing

Deep learning models can take hours, days or even weeks to train, and if a training run is stopped unexpectedly you can lose a lot of work. In this lesson you will discover how you can checkpoint your deep learning models during training in Python using the Keras library. After completing this lesson you will know:
The importance of checkpointing neural network models when training.
How to checkpoint each improvement to a model during training.
How to checkpoint the very best model observed during training.
Let's get started.

14.1 Checkpointing Neural Network Models

Application checkpointing is a fault tolerance technique for long running processes. It is an approach where a snapshot of the state of the system is taken in case of system failure. If there is a problem, not all is lost. The checkpoint may be used directly, or used as the starting point for a new run, picking up where it left off. When training deep learning models, the checkpoint captures the weights of the model. These weights can be used to make predictions as-is, or used as the basis for ongoing training.

The Keras library provides a checkpointing capability via a callback API. The ModelCheckpoint callback class allows you to define where to checkpoint the model weights, how the file should be named and under what circumstances to make a checkpoint of the model. The API allows you to specify which metric to monitor, such as loss or accuracy on the training or validation dataset. You can specify whether to look for an improvement in maximizing or minimizing the score. Finally, the filename that you use to store the weights can include variables like the epoch number or metric. The ModelCheckpoint instance can then be passed to the training process when calling the fit() function on the model. Note: you may need to install the h5py library (see Section 13.1.1).
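Before the full worked examples below, it may help to see just the wiring that the callback API requires. The following is a minimal sketch rather than a listing from this book: the filename and monitored metric are illustrative choices, and it assumes a model compiled on X and Y as in the earlier chapters.

# construct the callback: where to save, what to monitor, when to save
from keras.callbacks import ModelCheckpoint
checkpoint = ModelCheckpoint("best-weights.h5", monitor='val_loss', mode='min',
                             save_best_only=True, verbose=1)
# hand it to fit() via the callbacks argument; Keras invokes it at the end of each epoch
model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10,
          callbacks=[checkpoint], verbose=0)

The two sections that follow show the same mechanism with a filename pattern that embeds the epoch number and validation accuracy.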
14. 2. Checkpoint Neural Network Model Improvements9414. 2 Checkpoint Neural Network Model Improvements A good use of checkpointing is to output the model weights each time an improvement isobserved during training. The example below creates a small neural network for the Pima Indians onset of diabetes binary classification problem (see Section7. 2). The example uses 33%of the data for validation. Checkpointing is setup to save the network weights only when there is an improvement inclassification accuracy on the validation dataset (monitor='valacc'andmode='max'). Theweights are stored in a file that includes the score in the filenameweights-improvement-valacc=. 2f. hdf5. #Checkpointtheweightswhenvalidationaccuracyimprovesfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. callbacksimport Model Checkpointimportmatplotlib. pyplot as pltimportnumpy#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loadpimaindiansdatasetdataset = numpy. loadtxt("pima-indians-diabetes. csv", delimiter=",")#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:8]Y = dataset[:,8]#createmodelmodel = Sequential()model. add(Dense(12, input_dim=8, init= uniform, activation= relu ))model. add(Dense(8, init= uniform, activation= relu ))model. add(Dense(1, init= uniform, activation= sigmoid ))#Compilemodelmodel. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])#checkpointfilepath="weights-improvement-{epoch:02d}-{val_acc:. 2f}. hdf5"checkpoint = Model Checkpoint(filepath, monitor= val_acc, verbose=1, save_best_only=True,mode= max )callbacks_list = [checkpoint]#Fitthemodelmodel. fit(X, Y, validation_split=0. 33, nb_epoch=150, batch_size=10,callbacks=callbacks_list, verbose=0)Listing 14. 1: Checkpoint Model Improvements. Running the example produces the output below, truncated for brevity. In the output youcan see cases where an improvement in the model accuracy on the validation dataset resulted ina new weight file being written to disk....Epoch 00134: val_acc didnotimprove Epoch 00135: val_acc didnotimprove Epoch 00136: val_acc didnotimprove Epoch 00137: val_acc didnotimprove Epoch 00138: val_acc didnotimprove Epoch 00139: val_acc didnotimprove
14. 3. Checkpoint Best Neural Network Model Only95Epoch 00140: val_acc improvedfrom0. 83465 to 0. 83858, saving model toweights-improvement-140-0. 84. hdf5Epoch 00141: val_acc didnotimprove Epoch 00142: val_acc didnotimprove Epoch 00143: val_acc didnotimprove Epoch 00144: val_acc didnotimprove Epoch 00145: val_acc didnotimprove Epoch 00146: val_acc improvedfrom0. 83858 to 0. 84252, saving model toweights-improvement-146-0. 84. hdf5Epoch 00147: val_acc didnotimprove Epoch 00148: val_acc improvedfrom0. 84252 to 0. 84252, saving model toweights-improvement-148-0. 84. hdf5Epoch 00149: val_acc didnotimprove Listing 14. 2: Sample Output From Checkpoint Model Improvements. You will also see a number of files in your working directory containing the network weightsin HDF5 format. For example:... weights-improvement-74-0. 81. hdf5weights-improvement-81-0. 82. hdf5weights-improvement-91-0. 82. hdf5weights-improvement-93-0. 83. hdf5Listing 14. 3: Sample Model Checkpoint Files. This is a very simple checkpointing strategy. It may create a lot of unnecessary checkpointfiles if the validation accuracy moves up and down over training epochs. Nevertheless, it willensure that you have a snapshot of the best model discovered during your run. 14. 3 Checkpoint Best Neural Network Model Only A simpler checkpoint strategy is to save the model weights to the same file, if and only if thevalidation accuracy improves. This can be done easily using the same code from above andchanging the output filename to be fixed (not include score or epoch information). In this case,model weights are written to the fileweights. best. hdf5only if the classification accuracy ofthe model on the validation dataset improves over the best seen so far. #Checkpointtheweightsforbestmodelonvalidationaccuracyfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. callbacksimport Model Checkpointimportmatplotlib. pyplot as pltimportnumpy#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loadpimaindiansdatasetdataset = numpy. loadtxt("pima-indians-diabetes. csv", delimiter=",")#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:8]Y = dataset[:,8]#createmodelmodel = Sequential()
14. 4. Loading a Saved Neural Network Model96model. add(Dense(12, input_dim=8, init= uniform, activation= relu ))model. add(Dense(8, init= uniform, activation= relu ))model. add(Dense(1, init= uniform, activation= sigmoid ))#Compilemodelmodel. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])#checkpointfilepath="weights. best. hdf5"checkpoint = Model Checkpoint(filepath, monitor= val_acc, verbose=1, save_best_only=True,mode= max )callbacks_list = [checkpoint]#Fitthemodelmodel. fit(X, Y, validation_split=0. 33, nb_epoch=150, batch_size=10,callbacks=callbacks_list, verbose=0)Listing 14. 4: Checkpoint Best Model Only. Running this example provides the following output (truncated for brevity):... Epoch 00136: val_acc didnotimprove Epoch 00137: val_acc didnotimprove Epoch 00138: val_acc didnotimprove Epoch 00139: val_acc didnotimprove Epoch 00140: val_acc improvedfrom0. 83465 to 0. 83858, saving model to weights. best. hdf5Epoch 00141: val_acc didnotimprove Epoch 00142: val_acc didnotimprove Epoch 00143: val_acc didnotimprove Epoch 00144: val_acc didnotimprove Epoch 00145: val_acc didnotimprove Epoch 00146: val_acc improvedfrom0. 83858 to 0. 84252, saving model to weights. best. hdf5Epoch 00147: val_acc didnotimprove Epoch 00148: val_acc improvedfrom0. 84252 to 0. 84252, saving model to weights. best. hdf5Epoch 00149: val_acc didnotimprove Listing 14. 5: Sample Output From Checkpoint The Best Model. You should see the weight file in your local directory. weights. best. hdf5Listing 14. 6: Sample Best Model Checkpoint File. 14. 4 Loading a Saved Neural Network Model Now that you have seen how to checkpoint your deep learning models during training, you needto review how to load and use a checkpointed model. The checkpoint only includes the modelweights. It assumes you know the network structure. This too can be serialize to file in JSONor YAML format. In the example below, the model structure is known and the best weights areloaded from the previous experiment, stored in the working directory in theweights. best. hdf5file. The model is then used to make predictions on the entire dataset. #Howtoloadanduseweightsfromacheckpointfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. callbacksimport Model Checkpoint
14. 5. Summary97importmatplotlib. pyplot as pltimportnumpy#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#createmodelmodel = Sequential()model. add(Dense(12, input_dim=8, init= uniform, activation= relu ))model. add(Dense(8, init= uniform, activation= relu ))model. add(Dense(1, init= uniform, activation= sigmoid ))#loadweightsmodel. load_weights("weights. best. hdf5")#Compilemodel(requiredtomakepredictions)model. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])print("Createdmodelandloadedweightsfromfile")#loadpimaindiansdatasetdataset = numpy. loadtxt("pima-indians-diabetes. csv", delimiter=",")#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:8]Y = dataset[:,8]#estimateaccuracyonwholedatasetusingloadedweightsscores = model. evaluate(X, Y, verbose=0)print("%s:%. 2f%%"% (model. metrics_names[1], scores[1]*100))Listing 14. 7: Load and Evaluate a Model Checkpoint. Running the example produces the following output:Created modelandloaded weightsfromfileacc: 77. 73%Listing 14. 8: Sample Output From Loading and Evaluating a Model Checkpoint. 14. 5 Summary In this lesson you have discovered the importance of checkpointing deep learning models forlong training runs. You learned: How to use Keras to checkpoint each time an improvement to the model is observed. How to only checkpoint the very best model observed during training. How to load a checkpointed model from file and use it later to make predictions. 14. 5. 1 Next You now know how to checkpoint your deep learning models in Keras during long trainingschemes. In the next lesson you will discover how to collect, inspect and plot metrics collectedabout your model during training.
Chapter 15
Understand Model Behavior During Training By Plotting History

You can learn a lot about neural networks and deep learning models by observing their performance over time during training. In this lesson you will discover how you can review and visualize the performance of deep learning models over time during training in Python with Keras. After completing this lesson you will know:
How to inspect the history metrics collected during training.
How to plot accuracy metrics on training and validation datasets during training.
How to plot model loss metrics on training and validation datasets during training.
Let's get started.

15.1 Access Model Training History in Keras

Keras provides the capability to register callbacks when training a deep learning model. One of the default callbacks that is registered when training all deep learning models is the History callback. It records training metrics for each epoch. This includes the loss and the accuracy (for classification problems) as well as the loss and accuracy for the validation dataset, if one is set. The history object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the object returned. For example, you can list the metrics collected in a history object using the following snippet of code after a model is trained:

# list all data in history
print(history.history.keys())

Listing 15.1: Output Recorded History Metric Names.

For example, for a model trained on a classification problem with a validation dataset, this might produce the following listing:

['acc', 'loss', 'val_acc', 'val_loss']

Listing 15.2: Sample Output From Recorded History Metric Names.
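As a minimal sketch of the access pattern just described (assuming a compiled model and X, Y arrays as in the earlier Pima Indians examples), the return value of fit() can be captured and its metrics read back like any dictionary of lists, one value per epoch:

# capture the History object returned by fit()
history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=0)
# each entry in history.history is a list with one value per training epoch
print(history.history['loss'][-1])     # final training loss
print(history.history['val_acc'][-1])  # final validation accuracy

The full example in the next section uses the same dictionary to drive the Matplotlib plots.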
15. 2. Visualize Model Training History in Keras99We can use the data collected in the history object to create plots. The plots can provide anindication of useful things about the training of the model, such as: It's speed of convergence over epochs (slope). Whether the model may have already converged (plateau of the line). Whether the model may be over-learning the training data (inflection for validation line). And more. 15. 2 Visualize Model Training History in Keras We can create plots from the collected history data. In the example below we create asmall network to model the Pima Indians onset of diabetes binary classification problem (see Section7. 2). The example collects the history, returned from training the model and createstwo charts:1. A plot of accuracy on the training and validation datasets over training epochs. 2. A plot of loss on the training and validation datasets over training epochs. #Visualizetraininghistoryfromkeras. modelsimport Sequentialfromkeras. layersimport Denseimportmatplotlib. pyplot as pltimportnumpy#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loadpimaindiansdatasetdataset = numpy. loadtxt("pima-indians-diabetes. csv", delimiter=",")#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:8]Y = dataset[:,8]#createmodelmodel = Sequential()model. add(Dense(12, input_dim=8, init= uniform, activation= relu ))model. add(Dense(8, init= uniform, activation= relu ))model. add(Dense(1, init= uniform, activation= sigmoid ))#Compilemodelmodel. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])#Fitthemodelhistory = model. fit(X, Y, validation_split=0. 33, nb_epoch=150, batch_size=10, verbose=0)#listalldatainhistoryprint(history. history. keys())#summarizehistoryforaccuracyplt. plot(history. history[ acc ])plt. plot(history. history[ val_acc ])plt. title( modelaccuracy )plt. ylabel( accuracy )plt. xlabel( epoch )
15. 2. Visualize Model Training History in Keras100plt. legend([ train, test ], loc= upperleft )plt. show()#summarizehistoryforlossplt. plot(history. history[ loss ])plt. plot(history. history[ val_loss ])plt. title( modelloss )plt. ylabel( loss )plt. xlabel( epoch )plt. legend([ train, test ], loc= upperleft )plt. show()Listing 15. 3: Evaluate a Model and Plot Training History. The plots are provided below. The history for the validation dataset is labeled test byconvention as it is indeed a test dataset for the model. From the plot of accuracy we can see thatthe model could probably be trained a little more as the trend for accuracy on both datasets isstill rising for the last few epochs. We can also see that the model has not yet over-learned thetraining dataset, showing comparable skill on both datasets. Figure 15. 1: Plot of Model Accuracy on Train and Validation Datasets From the plot of loss, we can see that the model has comparable performance on both trainand validation datasets (labeled test). If these parallel plots start to depart consistently, itmight be a sign to stop training at an earlier epoch.
Figure 15.2: Plot of Model Loss on Training and Validation Datasets

15.3 Summary

In this lesson you discovered the importance of collecting and reviewing metrics during the training of your deep learning models. You learned:
How to inspect a history object returned from training to discover the metrics that were collected.
How to extract model accuracy information for training and validation datasets and plot the data.
How to extract and plot the model loss information calculated from training and validation datasets.

15.3.1 Next

A simple yet very powerful technique for decreasing the amount of overfitting of your model to training data is called dropout. In the next lesson you will discover the dropout technique, how to apply it to visible and hidden layers in Keras and best practices for using it on your own problems.
Chapter 16
Reduce Overfitting With Dropout Regularization

A simple and powerful regularization technique for neural networks and deep learning models is dropout. In this lesson you will discover the dropout regularization technique and how to apply it to your models in Python with Keras. After completing this lesson you will know:
How the dropout regularization technique works.
How to use dropout on your input (visible) layer.
How to use dropout on your hidden layers.
Let's get started.

16.1 Dropout Regularization For Neural Networks

Dropout is a regularization technique for neural network models proposed by Srivastava, et al. in their 2014 paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting1. Dropout is a technique where randomly selected neurons are ignored during training. They are dropped-out randomly. This means that their contribution to the activation of downstream neurons is temporarily removed on the forward pass and any weight updates are not applied to the neuron on the backward pass.

As a neural network learns, neuron weights settle into their context within the network. Weights of neurons are tuned for specific features, providing some specialization. Neighboring neurons come to rely on this specialization, which if taken too far can result in a fragile model too specialized to the training data. This reliance on context for a neuron during training is referred to as complex co-adaptation. You can imagine that if neurons are randomly dropped out of the network during training, other neurons will have to step in and handle the representation required to make predictions for the missing neurons. This is believed to result in multiple independent internal representations being learned by the network.

The effect is that the network becomes less sensitive to the specific weights of neurons. This in turn results in a network that is capable of better generalization and is less likely to overfit the training data.

1 http://jmlr.org/papers/v15/srivastava14a.html
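The mechanics can be illustrated outside of Keras with a few lines of NumPy. This is only a sketch of the general idea (so-called inverted dropout, where the surviving activations are scaled up so their expected sum is unchanged); it is not a description of how Keras implements the layer internally.

import numpy as np

rate = 0.2                                            # fraction of neurons to drop
activations = np.array([0.5, 1.2, 0.3, 0.8, 1.1])     # outputs of one layer
# random binary mask: each unit is silenced with probability 0.2 for this update
mask = (np.random.rand(activations.shape[0]) >= rate).astype(float)
dropped = activations * mask / (1.0 - rate)           # rescale the survivors
print(dropped)

At evaluation time no mask is applied, which matches the Keras behavior described in the next section.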
16. 2. Dropout Regularization in Keras10316. 2 Dropout Regularization in Keras Dropout is easily implemented by randomly selecting nodes to be dropped-out with a givenprobability (e. g. 20%) each weight update cycle. This is how Dropout is implemented in Keras. Dropout is only used during the training of a model and is not used when evaluating the skill ofthe model. Next we will explore a few di↵erent ways of using Dropout in Keras. The examples will use the Sonar dataset binary classification dataset (learn more in Sec-tion11. 1). We will evaluate the developed models using scikit-learn with 10-fold cross validation,in order to better tease out di↵erences in the results. There are 60 input values and a singleoutput value and the input values are standardized before being used in the network. Thebaseline neural network model has two hidden layers, the first with 60 units and the secondwith 30. Stochastic gradient descent is used to train the model with a relatively low learningrate and momentum. The full baseline model is listed below. #Baseline Modelonthe Sonar Datasetimportnumpyimportpandasfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. wrappers. scikit_learnimport Keras Classifierfromkeras. constraintsimportmaxnormfromkeras. optimizersimport SGDfromsklearn. model_selectionimportcross_val_scorefromsklearn. preprocessingimport Label Encoderfromsklearn. model_selectionimport Stratified KFoldfromsklearn. preprocessingimport Standard Scalerfromsklearn. pipelineimport Pipeline#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddatasetdataframe = pandas. read_csv("sonar. csv", header=None)dataset = dataframe. values#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:60]. astype(float)Y = dataset[:,60]#encodeclassvaluesasintegersencoder = Label Encoder()encoder. fit(Y)encoded_Y = encoder. transform(Y)#baselinedefcreate_baseline():#createmodelmodel = Sequential()model. add(Dense(60, input_dim=60, init= normal, activation= relu ))model. add(Dense(30, init= normal, activation= relu ))model. add(Dense(1, init= normal, activation= sigmoid ))#Compilemodelsgd = SGD(lr=0. 01, momentum=0. 8, decay=0. 0, nesterov=False)model. compile(loss= binary_crossentropy, optimizer=sgd, metrics=[ accuracy ])returnmodel
16. 3. Using Dropout on the Visible Layer104numpy. random. seed(seed)estimators = []estimators. append(( standardize, Standard Scaler()))estimators. append(( mlp, Keras Classifier(build_fn=create_baseline, nb_epoch=300,batch_size=16, verbose=0)))pipeline = Pipeline(estimators)kfold = Stratified KFold(n_splits=10, shuffle=True, random_state=seed)results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)print("Baseline:%. 2f%%(%. 2f%%)"% (results. mean()*100, results. std()*100))Listing 16. 1: Baseline Neural Network For The Sonar Dataset. Running the example for the baseline model without drop-out generates an estimatedclassification accuracy of 82%. Baseline: 82. 68% (3. 90%)Listing 16. 2: Sample Output From Baseline Neural Network For The Sonar Dataset. 16. 3 Using Dropout on the Visible Layer Dropout can be applied to input neurons called the visible layer. In the example below we add anew Dropout layer between the input (or visible layer) and the first hidden layer. The dropoutrate is set to 20%, meaning one in five inputs will be randomly excluded from each update cycle. Additionally, as recommended in the original paper on dropout, a constraint is imposed onthe weights for each hidden layer, ensuring that the maximum norm of the weights does notexceed a value of 3. This is done by setting the Wconstraintargument on the Denseclasswhen constructing the layers. The learning rate was lifted by one order of magnitude and themomentum was increased to 0. 9. These increases in the learning rate were also recommended inthe original dropout paper. Continuing on from the baseline example above, the code belowexercises the same network with input dropout. #Exampleof Dropoutonthe Sonar Dataset:Visible Layerimportnumpyimportpandasfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. wrappers. scikit_learnimport Keras Classifierfromkeras. constraintsimportmaxnormfromkeras. optimizersimport SGDfromsklearn. model_selectionimportcross_val_scorefromsklearn. preprocessingimport Label Encoderfromsklearn. model_selectionimport Stratified KFoldfromsklearn. preprocessingimport Standard Scalerfromsklearn. pipelineimport Pipeline#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddatasetdataframe = pandas. read_csv("sonar. csv", header=None)dataset = dataframe. values#splitintoinput(X)andoutput(Y)variables
16. 4. Using Dropout on Hidden Layers105X = dataset[:,0:60]. astype(float)Y = dataset[:,60]#encodeclassvaluesasintegersencoder = Label Encoder()encoder. fit(Y)encoded_Y = encoder. transform(Y)#dropoutintheinputlayerwithweightconstraintdefcreate_model():#createmodelmodel = Sequential()model. add(Dropout(0. 2, input_shape=(60,)))model. add(Dense(60, init= normal, activation= relu, W_constraint=maxnorm(3)))model. add(Dense(30, init= normal, activation= relu, W_constraint=maxnorm(3)))model. add(Dense(1, init= normal, activation= sigmoid ))#Compilemodelsgd = SGD(lr=0. 1, momentum=0. 9, decay=0. 0, nesterov=False)model. compile(loss= binary_crossentropy, optimizer=sgd, metrics=[ accuracy ])returnmodelnumpy. random. seed(seed)estimators = []estimators. append(( standardize, Standard Scaler()))estimators. append(( mlp, Keras Classifier(build_fn=create_model, nb_epoch=300,batch_size=16, verbose=0)))pipeline = Pipeline(estimators)kfold = Stratified KFold(n_splits=10, shuffle=True, random_state=seed)results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)print("Visible:%. 2f%%(%. 2f%%)"% (results. mean()*100, results. std()*100))Listing 16. 3: Example of Using Dropout on the Visible Layer. Running the example with dropout in the visible layer provides a nice lift in classificationaccuracy to 86%. Visible: 86. 04% (6. 33%)Listing 16. 4: Sample Output From Example of Using Dropout on the Visible Layer. 16. 4 Using Dropout on Hidden Layers Dropout can be applied to hidden neurons in the body of your network model. In the examplebelow dropout is applied between the two hidden layers and between the last hidden layer andthe output layer. Again a dropout rate of 20% is used as is a weight constraint on those layers. #Exampleof Dropoutonthe Sonar Dataset:Hidden Layerimportnumpyimportpandasfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. wrappers. scikit_learnimport Keras Classifierfromkeras. constraintsimportmaxnormfromkeras. optimizersimport SGDfromsklearn. model_selectionimportcross_val_score
16. 4. Using Dropout on Hidden Layers106fromsklearn. preprocessingimport Label Encoderfromsklearn. model_selectionimport Stratified KFoldfromsklearn. preprocessingimport Standard Scalerfromsklearn. pipelineimport Pipeline#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddatasetdataframe = pandas. read_csv("sonar. csv", header=None)dataset = dataframe. values#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:60]. astype(float)Y = dataset[:,60]#encodeclassvaluesasintegersencoder = Label Encoder()encoder. fit(Y)encoded_Y = encoder. transform(Y)#dropoutinhiddenlayerswithweightconstraintdefcreate_model():#createmodelmodel = Sequential()model. add(Dense(60, input_dim=60, init= normal, activation= relu,W_constraint=maxnorm(3)))model. add(Dropout(0. 2))model. add(Dense(30, init= normal, activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 2))model. add(Dense(1, init= normal, activation= sigmoid ))#Compilemodelsgd = SGD(lr=0. 1, momentum=0. 9, decay=0. 0, nesterov=False)model. compile(loss= binary_crossentropy, optimizer=sgd, metrics=[ accuracy ])returnmodelnumpy. random. seed(seed)estimators = []estimators. append(( standardize, Standard Scaler()))estimators. append(( mlp, Keras Classifier(build_fn=create_model, nb_epoch=300,batch_size=16, verbose=0)))pipeline = Pipeline(estimators)kfold = Stratified KFold(n_splits=10, shuffle=True, random_state=seed)results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)print("Hidden:%. 2f%%(%. 2f%%)"% (results. mean()*100, results. std()*100))Listing 16. 5: Example of Using Dropout on Hidden Layers. We can see that for this problem and for the chosen network configuration that using dropoutin the hidden layers did not lift performance. In fact, performance was worse than the baseline. It is possible that additional training epochs are required or that further tuning is required tothe learning rate. Hidden: 83. 09% (7. 63%)Listing 16. 6: Sample Output From Example of Using Dropout on the Hidden Layers.
16. 5. Tips For Using Dropout10716. 5 Tips For Using Dropout The original paper on Dropout provides experimental results on a suite of standard machinelearning problems. As a result they provide a number of useful heuristics to consider when usingdropout in practice: Generally use a small dropout value of 20%-50% of neurons with 20% providing a goodstarting point. A probability too low has minimal e↵ect and a value too high results inunder-learning by the network. Use a larger network. You are likely to get better performance when dropout is usedon a larger network, giving the model more of an opportunity to learn independentrepresentations. Use dropout on input (visible) as well as hidden layers. Application of dropout at eachlayer of the network has shown good results. Use a large learning rate with decay and a large momentum. Increase your learning rateby a factor of 10 to 100 and use a high momentum value of 0. 9 or 0. 99. Constrain the size of network weights. A large learning rate can result in very largenetwork weights. Imposing a constraint on the size of network weights such as max-normregularization with a size of 4 or 5 has been shown to improve results. 16. 6 Summary In this lesson you discovered the dropout regularization technique for deep learning models. You learned: What dropout is and how it works. How you can use dropout on your own deep learning models. Tips for getting the best results from dropout on your own models. 16. 6. 1 Next Another important technique for improving the performance of your neural network models isto adapt the learning rate during training. In the next lesson you will discover di↵erent learningrate schedules and how you can apply them with Keras to your own problems.
Chapter 17
Lift Performance With Learning Rate Schedules

Training a neural network or large deep learning model is a difficult optimization task. The classical algorithm to train neural networks is called stochastic gradient descent. It has been well established that you can achieve increased performance and faster training on some problems by using a learning rate that changes during training. In this lesson you will discover how you can use different learning rate schedules for your neural network models in Python using the Keras deep learning library. After completing this lesson you will know:
The benefit of learning rate schedules on lifting model performance during training.
How to configure and evaluate a time-based learning rate schedule.
How to configure and evaluate a drop-based learning rate schedule.
Let's get started.

17.1 Learning Rate Schedule For Training Models

Adapting the learning rate for your stochastic gradient descent optimization procedure can increase performance and reduce training time. Sometimes this is called learning rate annealing or adaptive learning rates. Here we will call this approach a learning rate schedule, where the default schedule is to use a constant learning rate to update network weights for each training epoch.

The simplest and perhaps most used adaptation of learning rates during training are techniques that reduce the learning rate over time. These have the benefit of making large changes at the beginning of the training procedure when larger learning rate values are used, and decreasing the learning rate such that a smaller rate and therefore smaller training updates are made to weights later in the training procedure. This has the effect of quickly learning good weights early and fine tuning them later. Two popular and easy to use learning rate schedules are as follows:
Decrease the learning rate gradually based on the epoch.
Decrease the learning rate using punctuated large drops at specific epochs.
Next, we will look at how you can use each of these learning rate schedules in turn with Keras.

17.2 Ionosphere Classification Dataset

The Ionosphere binary classification problem is used as a demonstration in this lesson. The dataset describes radar returns where the target was free electrons in the ionosphere. It is a binary classification problem where positive cases (g for good) show evidence of some type of structure in the ionosphere and negative cases (b for bad) do not. It is a good dataset for practicing with neural networks because all of the inputs are small numerical values of the same scale. There are 34 attributes and 351 observations.

State-of-the-art results on this dataset achieve an accuracy of approximately 94% to 98% using 10-fold cross validation1. The dataset is available within the code bundle provided with this book. Alternatively, you can download it directly from the UCI Machine Learning repository2. Place the data file in your working directory with the filename ionosphere.csv. You can learn more about the ionosphere dataset on the UCI Machine Learning Repository website3.

17.3 Time-Based Learning Rate Schedule

Keras has a time-based learning rate schedule built in. The stochastic gradient descent optimization algorithm implementation in the SGD class has an argument called decay. This argument is used in the time-based learning rate decay schedule equation as follows:

LearningRate = LearningRate * 1 / (1 + decay * epoch)

Figure 17.1: Calculate Learning Rate For Time-Based Decay.

When the decay argument is zero (the default), this has no effect on the learning rate (e.g. 0.1).

LearningRate = 0.1 * 1 / (1 + 0.0 * 1)
LearningRate = 0.1

Listing 17.1: Example Calculating Learning Rate Without Decay.

When the decay argument is specified, it will decrease the learning rate from the previous epoch by the given fixed amount. For example, if we use the initial learning rate value of 0.1 and the decay of 0.001, the first 5 epochs will adapt the learning rate as follows:

Epoch  Learning Rate
1      0.1
2      0.0999000999
3      0.0997006985
4      0.09940249103
5      0.09900646517

Listing 17.2: Output of Calculating Learning Rate With Decay.

1 http://www.is.umk.pl/projects/datasets.html#Ionosphere
2 http://archive.ics.uci.edu/ml/machine-learning-databases/ionosphere/ionosphere.data
3 https://archive.ics.uci.edu/ml/datasets/Ionosphere

Extending this out to 100 epochs will produce the graph of learning rate (y-axis) versus epoch (x-axis) shown in Figure 17.2.

Figure 17.2: Time-Based Learning Rate Schedule

You can create a nice default schedule by setting the decay value as follows:

Decay = LearningRate / Epochs
Decay = 0.1 / 100
Decay = 0.001

Listing 17.3: Example of A Good Default Decay Rate.

The example below demonstrates using the time-based learning rate adaptation schedule in Keras. A small neural network model is constructed with a single hidden layer with 34 neurons, using the rectifier activation function. The output layer has a single neuron and uses the sigmoid activation function in order to output probability-like values. The learning rate for stochastic gradient descent has been set to a higher value of 0.1. The model is trained for 50 epochs and the decay argument has been set to 0.002, calculated as 0.1/50. Additionally, it can be a good idea to use momentum when using an adaptive learning rate. In this case we use a momentum value of 0.8. The complete example is listed below.

# Time Based Learning Rate Decay
import pandas
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from sklearn.preprocessing import LabelEncoder
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("ionosphere.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:34].astype(float)
Y = dataset[:,34]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
Y = encoder.transform(Y)
# create model
model = Sequential()
model.add(Dense(34, input_dim=34, init='normal', activation='relu'))
model.add(Dense(1, init='normal', activation='sigmoid'))
# Compile model
epochs = 50
learning_rate = 0.1
decay_rate = learning_rate / epochs
momentum = 0.8
sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Fit the model
model.fit(X, Y, validation_split=0.33, nb_epoch=epochs, batch_size=28, verbose=2)

Listing 17.4: Example of Time-Based Learning Rate Decay.

The model is trained on 67% of the dataset and evaluated using a 33% validation dataset. Running the example shows a classification accuracy of 99.14%. This is higher than the baseline of 95.69% without the learning rate decay or momentum.

Epoch 46/50
0s - loss: 0.0570 - acc: 0.9830 - val_loss: 0.0867 - val_acc: 0.9914
Epoch 47/50
0s - loss: 0.0584 - acc: 0.9830 - val_loss: 0.0808 - val_acc: 0.9914
Epoch 48/50
0s - loss: 0.0610 - acc: 0.9872 - val_loss: 0.0653 - val_acc: 0.9828
Epoch 49/50
0s - loss: 0.0591 - acc: 0.9830 - val_loss: 0.0821 - val_acc: 0.9914
Epoch 50/50
0s - loss: 0.0598 - acc: 0.9872 - val_loss: 0.0739 - val_acc: 0.9914

Listing 17.5: Sample Output of Time-Based Learning Rate Decay.
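As a quick sanity check on the schedule itself, the update in Figure 17.1 can be evaluated by hand. The sketch below applies the formula recursively, epoch by epoch, with the decay of 0.001 used in Listing 17.2 and reproduces the values in that table. It is a standalone illustration of the arithmetic; the exact bookkeeping inside the Keras SGD optimizer (which applies decay per batch update rather than per epoch) may differ slightly.

# reproduce Listing 17.2: LearningRate = LearningRate * 1 / (1 + decay * epoch)
lr = 0.1
decay = 0.001
for epoch in range(1, 6):
    print("Epoch %d: %.11f" % (epoch, lr))
    lr = lr * 1.0 / (1.0 + decay * epoch)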
17.4 Drop-Based Learning Rate Schedule

Another popular learning rate schedule used with deep learning models is to systematically drop the learning rate at specific times during training. Often this method is implemented by dropping the learning rate by half every fixed number of epochs. For example, we may have an initial learning rate of 0.1 and drop it by a factor of 0.5 every 10 epochs. The first 10 epochs of training would use a value of 0.1, in the next 10 epochs a learning rate of 0.05 would be used, and so on. If we plot out the learning rates for this example out to 100 epochs, you get the graph below showing learning rate (y-axis) versus epoch (x-axis).

Figure 17.3: Drop Based Learning Rate Schedule

We can implement this in Keras using the LearningRateScheduler callback4 when fitting the model. The LearningRateScheduler callback allows us to define a function to call that takes the epoch number as an argument and returns the learning rate to use in stochastic gradient descent. When used, the learning rate specified by stochastic gradient descent is ignored. In the code below, we use the same example as before of a single hidden layer network on the Ionosphere dataset. A new step_decay() function is defined that implements the equation:

LearningRate = InitialLearningRate * DropRate^floor((1 + Epoch) / EpochDrop)

Figure 17.4: Calculate Learning Rate Using a Drop Schedule.

4 http://keras.io/callbacks/
17. 4. Drop-Based Learning Rate Schedule113Where Initial Learning Rateis the learning rate at the beginning of the run,Epoch Dropis how often the learning rate is dropped in epochs and Drop Rateis how much to drop thelearning rate each time it is dropped. #Drop-Based Learning Rate Decayimportpandasimportpandasimportnumpyimportmathfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. optimizersimport SGDfromsklearn. preprocessingimport Label Encoderfromkeras. callbacksimport Learning Rate Scheduler#learningratescheduledefstep_decay(epoch):initial_lrate = 0. 1drop = 0. 5epochs_drop = 10. 0lrate = initial_lrate * math. pow(drop, math. floor((1+epoch)/epochs_drop))returnlrate#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddatasetdataframe = pandas. read_csv("ionosphere. csv", header=None)dataset = dataframe. values#splitintoinput(X)andoutput(Y)variables X = dataset[:,0:34]. astype(float)Y = dataset[:,34]#encodeclassvaluesasintegersencoder = Label Encoder()encoder. fit(Y)Y = encoder. transform(Y)#createmodelmodel = Sequential()model. add(Dense(34, input_dim=34, init= normal, activation= relu ))model. add(Dense(1, init= normal, activation= sigmoid ))#Compilemodelsgd = SGD(lr=0. 0, momentum=0. 9, decay=0. 0, nesterov=False)model. compile(loss= binary_crossentropy, optimizer=sgd, metrics=[ accuracy ])#learningschedulecallbacklrate = Learning Rate Scheduler(step_decay)callbacks_list = [lrate]#Fitthemodelmodel. fit(X, Y, validation_split=0. 33, nb_epoch=50, batch_size=28,callbacks=callbacks_list, verbose=2)Listing 17. 6: Example of Drop-Based Learning Rate Decay. Running the example results in a classification accuracy of 99. 14% on the validation dataset,again an improvement over the baseline for the model on this dataset. 0s-loss: 0. 0546-acc: 0. 9830-val_loss: 0. 0705-val_acc: 0. 9914Epoch 46/50
0s - loss: 0.0542 - acc: 0.9830 - val_loss: 0.0676 - val_acc: 0.9914
Epoch 47/50
0s - loss: 0.0538 - acc: 0.9830 - val_loss: 0.0668 - val_acc: 0.9914
Epoch 48/50
0s - loss: 0.0539 - acc: 0.9830 - val_loss: 0.0708 - val_acc: 0.9914
Epoch 49/50
0s - loss: 0.0539 - acc: 0.9830 - val_loss: 0.0674 - val_acc: 0.9914
Epoch 50/50
0s - loss: 0.0531 - acc: 0.9830 - val_loss: 0.0694 - val_acc: 0.9914

Listing 17.7: Sample Output of Drop-Based Learning Rate Decay.

17.5 Tips for Using Learning Rate Schedules

This section lists some tips and tricks to consider when using learning rate schedules with neural networks.

Increase the initial learning rate. Because the learning rate will decrease, start with a larger value to decrease from. A larger learning rate will result in much larger changes to the weights, at least in the beginning, allowing you to benefit from fine tuning later.

Use a large momentum. Using a larger momentum value will help the optimization algorithm to continue to make updates in the right direction when your learning rate shrinks to small values.

Experiment with different schedules. It will not be clear which learning rate schedule to use, so try a few with different configuration options and see what works best on your problem. Also try schedules that change exponentially and even schedules that respond to the accuracy of your model on the training or test datasets.

17.6 Summary

In this lesson you discovered learning rate schedules for training neural network models. You learned:
The benefits of using learning rate schedules during training to lift model performance.
How to configure and use a time-based learning rate schedule in Keras.
How to develop your own drop-based learning rate schedule in Keras.

17.6.1 Next

This concludes the lessons for Part IV. Now you know how to use more advanced features of Keras and more advanced techniques to get improved performance from your neural network models. Next, in Part V, you will discover a new type of model called the convolutional neural network that is achieving state-of-the-art results in computer vision and natural language processing problems.
Part V
Convolutional Neural Networks
Chapter 18
Crash Course In Convolutional Neural Networks

Convolutional Neural Networks are a powerful artificial neural network technique. These networks preserve the spatial structure of the problem and were developed for object recognition tasks such as handwritten digit recognition. They are popular because people are achieving state-of-the-art results on difficult computer vision and natural language processing tasks. In this lesson you will discover Convolutional Neural Networks for deep learning, also called ConvNets or CNNs. After completing this crash course you will know:
The building blocks used in CNNs such as convolutional layers and pool layers.
How the building blocks fit together with a short worked example.
Best practices for configuring CNNs on your own object recognition tasks.
Let's get started.

18.1 The Case for Convolutional Neural Networks

Given a dataset of gray scale images with the standardized size of 32 × 32 pixels each, a traditional feedforward neural network would require 1,024 input weights (plus one bias). This is fair enough, but the flattening of the image matrix of pixels to a long vector of pixel values loses all of the spatial structure in the image. Unless all of the images are perfectly resized, the neural network will have great difficulty with the problem.

Convolutional Neural Networks expect and preserve the spatial relationship between pixels by learning internal feature representations using small squares of input data. Features are learned and used across the whole image, allowing the objects in the images to be shifted or translated in the scene and still be detectable by the network. This is why the network is so useful for object recognition in photographs, picking out digits, faces, objects and so on with varying orientation. In summary, below are some of the benefits of using convolutional neural networks:
They use fewer parameters (weights) to learn than a fully connected network (see the sketch after this list).
They are designed to be invariant to object position and distortion in the scene.
They automatically learn and generalize features from the input domain.
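The first of these benefits is easy to quantify with a little arithmetic. The sketch below compares the weight count of one fully connected hidden layer on the flattened 32 × 32 image against a small bank of convolutional filters; the 100 hidden neurons are an illustrative choice, while the 10 filters of size 5 × 5 match the worked example later in Section 18.6.

# weights for a dense hidden layer on the flattened 32x32 image
inputs = 32 * 32                             # 1,024 pixel inputs
hidden = 100                                 # illustrative hidden layer size
dense_weights = inputs * hidden + hidden     # weights plus one bias per neuron
# weights for a convolutional layer with 10 filters of 5x5
filters = 10
conv_weights = filters * (5 * 5 + 1)         # 25 weights plus a bias per filter
print(dense_weights)                         # 102500
print(conv_weights)                          # 260

The convolutional filters are also reused at every position in the image, which is where the shift invariance described above comes from.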
18.2 Building Blocks of Convolutional Neural Networks

There are three types of layers in a Convolutional Neural Network:
1. Convolutional Layers.
2. Pooling Layers.
3. Fully-Connected Layers.

18.3 Convolutional Layers

Convolutional layers are comprised of filters and feature maps.

18.3.1 Filters

The filters are essentially the neurons of the layer. They have both weighted inputs and generate an output value like a neuron. The input size is a fixed square called a patch or a receptive field. If the convolutional layer is an input layer, then the input patch will be pixel values. If it is deeper in the network architecture, then the convolutional layer will take input from a feature map from the previous layer.

18.3.2 Feature Maps

The feature map is the output of one filter applied to the previous layer. A given filter is drawn across the entire previous layer, moved one pixel at a time. Each position results in an activation of the neuron and the output is collected in the feature map. You can see that if the receptive field is moved one pixel from activation to activation, then the field will overlap with the previous activation by (field width - 1) input values.

The distance that the filter is moved across the input from the previous layer for each activation is referred to as the stride. If the size of the previous layer is not cleanly divisible by the size of the filter's receptive field and the size of the stride, then it is possible for the receptive field to attempt to read off the edge of the input feature map. In this case, techniques like zero padding can be used to invent mock inputs with zero values for the receptive field to read.

18.4 Pooling Layers

The pooling layers down-sample the previous layer's feature map. Pooling layers follow a sequence of one or more convolutional layers and are intended to consolidate the features learned and expressed in the previous layer's feature map. As such, pooling may be considered a technique to compress or generalize feature representations and generally reduce the overfitting of the training data by the model.

They too have a receptive field, often much smaller than the convolutional layer. Also, the stride or number of inputs that the receptive field is moved for each activation is often equal to the size of the receptive field to avoid any overlap. Pooling layers are often very simple, taking the average or the maximum of the input value in order to create its own feature map.
18.5 Fully Connected Layers

Fully connected layers are the normal flat feedforward neural network layer. These layers may have a nonlinear activation function or a softmax activation in order to output probabilities of class predictions. Fully connected layers are used at the end of the network after feature extraction and consolidation has been performed by the convolutional and pooling layers. They are used to create final nonlinear combinations of features and for making predictions by the network.

18.6 Worked Example

You now know about convolutional, pooling and fully connected layers. Let's make this more concrete by working through how these three layers may be connected together.

18.6.1 Image Input Data

Let's assume we have a dataset of gray scale images. Each image has the same size of 32 pixels wide and 32 pixels high, and pixel values are between 0 and 255, e.g. a matrix of 32 × 32 × 1 or 1,024 pixel values. Image input data is expressed as a 3-dimensional matrix of width × height × channels. If we were using color images in our example, we would have 3 channels for the red, green and blue pixel values, e.g. 32 × 32 × 3.

18.6.2 Convolutional Layer

We define a convolutional layer with 10 filters, a receptive field 5 pixels wide and 5 pixels high and a stride length of 1. Because each filter can only get input from (i.e. see) 5 × 5 (25) pixels at a time, we can calculate that each will require 25 + 1 input weights (the extra 1 being the bias input). Dragging the 5 × 5 receptive field across the input image data with a stride width of 1 will result in a feature map of 28 × 28 output values, or 784 distinct activations per image.

We have 10 filters, so that is 10 different 28 × 28 feature maps, or 7,840 outputs that will be created for one image. Finally, we know we have 26 inputs per filter, 10 filters and 28 × 28 output values to calculate per filter, therefore we have a total of 26 × 10 × 28 × 28 or 203,840 connections in our convolutional layer, if we want to phrase it using traditional neural network nomenclature. Convolutional layers also make use of a nonlinear transfer function as part of activation, and the rectifier activation function is the popular default to use.

18.6.3 Pool Layer

We define a pooling layer with a receptive field with a width of 2 inputs and a height of 2 inputs. We also use a stride of 2 to ensure that there is no overlap. This results in feature maps that are one half the size of the input feature maps: from 10 different 28 × 28 feature maps as input to 10 different 14 × 14 feature maps as output. We will use a max() operation for each receptive field so that the activation is the maximum input value.
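The sizes quoted in the last two subsections can be re-derived with a few lines of arithmetic. This sketch only replays the worked example's numbers (stride 1, no padding), so every value comes directly from the text above.

# worked example: 32x32 gray scale input, 10 filters of 5x5, stride 1, no padding
image, field, stride, filters = 32, 5, 1, 10
feature_map = (image - field) // stride + 1          # 28
activations_per_filter = feature_map * feature_map   # 784 activations per filter
outputs = filters * activations_per_filter           # 7,840 outputs for one image
connections = (field * field + 1) * outputs          # 26 inputs each -> 203,840
pooled = feature_map // 2                            # 2x2 max pooling, stride 2 -> 14
print(feature_map, outputs, connections, pooled)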
18.6.4 Fully Connected Layer

Finally, we can flatten out the square feature maps into a traditional flat fully connected layer. We can define the fully connected layer with 200 hidden neurons, each with 10 × 14 × 14 input connections, or 1,960 + 1 weights per neuron. That is a total of 392,200 connections and weights to learn in this layer. We can use a sigmoid or softmax transfer function to output probabilities of class values directly.

18.7 Convolutional Neural Networks Best Practices

Now that we know about the building blocks for a convolutional neural network and how the layers hang together, we can review some best practices to consider when applying them.

Input Receptive Field Dimensions: The default is 2D for images, but could be 1D such as for words in a sentence, or 3D for video that adds a time dimension.

Receptive Field Size: The patch should be as small as possible, but large enough to see features in the input data. It is common to use 3 × 3 on small images and 5 × 5 or 7 × 7 and more on larger image sizes.

Stride Width: Use the default stride of 1. It is easy to understand and you don't need padding to handle the receptive field falling off the edge of your images. This could be increased to 2 or larger for larger images.

Number of Filters: Filters are the feature detectors. Generally fewer filters are used at the input layer and increasingly more filters are used at deeper layers.

Padding: Set to zero and called zero padding when reading non-input data. This is useful when you cannot or do not want to standardize input image sizes, or when you want to use receptive field and stride sizes that do not neatly divide up the input image size.

Pooling: Pooling is a destructive or generalization process to reduce overfitting. Receptive field size is almost always set to 2 × 2 with a stride of 2 to discard 75% of the activations from the output of the previous layer.

Data Preparation: Consider standardizing input data, both the dimensions of the images and pixel values.

Pattern Architecture: It is common to pattern the layers in your network architecture. This might be one, two or some number of convolutional layers followed by a pooling layer. This structure can then be repeated one or more times. Finally, fully connected layers are often only used at the output end and may be stacked one, two or more deep (a small sketch of this pattern follows after this list).

Dropout: CNNs have a habit of overfitting, even with pooling layers. Dropout should be used, such as between fully connected layers and perhaps after pooling layers.
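As a minimal sketch of the pattern architecture described above, the model below stacks one convolutional layer, one pooling layer, a flatten step and two fully connected layers. It is written against the older Keras 1.x API style used throughout this book (Convolution2D rather than the later Conv2D); the filter count, layer sizes and the single-channel 28 × 28, channels-first input shape are illustrative choices, not values prescribed by the text.

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D

# convolution -> pooling -> flatten -> fully connected, following the pattern above
model = Sequential()
model.add(Convolution2D(10, 5, 5, input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

The next chapter develops this idea into a complete model for handwritten digit recognition.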
18.8 Summary

In this lesson you discovered convolutional neural networks. You learned:
Why CNNs are needed to preserve spatial structure in your input data and the benefits they provide.
The building blocks of a CNN, including convolutional, pooling and fully connected layers.
How the layers in a CNN hang together.
Best practices when applying CNNs to your own problems.

18.8.1 Next

You now know about convolutional neural networks. In the next section you will discover how to develop your first convolutional neural network in Keras for a handwritten digit recognition problem.
Chapter 19
Project: Handwritten Digit Recognition

A popular demonstration of the capability of deep learning techniques is object recognition in image data. The hello world of object recognition for machine learning and deep learning is the MNIST dataset for handwritten digit recognition. In this project you will discover how to develop a deep learning model to achieve near state-of-the-art performance on the MNIST handwritten digit recognition task in Python using the Keras deep learning library. After completing this step-by-step tutorial, you will know:
How to load the MNIST dataset in Keras and develop a baseline neural network model for the problem.
How to implement and evaluate a simple Convolutional Neural Network for MNIST.
How to implement a close to state-of-the-art deep learning model for MNIST.
Let's get started.

Note: You may want to speed up the computation for this tutorial by using GPU rather than CPU hardware, such as the process described in Chapter 5. This is a suggestion, not a requirement. The tutorial will work just fine on the CPU.

19.1 Handwritten Digit Recognition Dataset

The MNIST problem is a dataset developed by Yann LeCun, Corinna Cortes and Christopher Burges for evaluating machine learning models on the handwritten digit classification problem1. The dataset was constructed from a number of scanned document datasets available from the National Institute of Standards and Technology (NIST). This is where the name for the dataset comes from, as the Modified NIST or MNIST dataset.

Images of digits were taken from a variety of scanned documents, normalized in size and centered. This makes it an excellent dataset for evaluating models, allowing the developer to focus on the machine learning with very little data cleaning or preparation required. Each image is a 28 × 28 pixel square (784 pixels total). A standard split of the dataset is used to

1 http://yann.lecun.com/exdb/mnist/
19. 2. Loading the MNIST dataset in Keras122evaluate and compare models, where 60,000 images are used to train a model and a separate setof 10,000 images are used to test it. It is a digit recognition task. As such there are 10 digits (0 to 9) or 10 classes to predict. Results are reported using prediction error, which is nothing more than the inverted classificationaccuracy. Excellent results achieve a prediction error of less than 1%. State-of-the-art predictionerror of approximately 0. 2% can be achieved with large Convolutional Neural Networks. Thereis a listing of the state-of-the-art results and links to the relevant papers on the MNIST andother datasets on Rodrigo Benenson's webpage2. 19. 2 Loading the MNIST dataset in Keras The Keras deep learning library provides a convenience method for loading the MNIST dataset. The dataset is downloaded automatically the first time this function is called and is stored inyour home directory in~/. keras/datasets/mnist. pkl. gzas a 15 megabyte file. This is veryhandy for developing and testing deep learning models. To demonstrate how easy it is to loadthe MNIST dataset, we will first write a little script to download and visualize the first 4 imagesin the training dataset. #Plotadhocmnistinstancesfromkeras. datasetsimportmnistimportmatplotlib. pyplot as plt#load(downloadedifneeded)the MNISTdataset(X_train, y_train), (X_test, y_test) = mnist. load_data()#plot4imagesasgrayscaleplt. subplot(221)plt. imshow(X_train[0], cmap=plt. get_cmap( gray ))plt. subplot(222)plt. imshow(X_train[1], cmap=plt. get_cmap( gray ))plt. subplot(223)plt. imshow(X_train[2], cmap=plt. get_cmap( gray ))plt. subplot(224)plt. imshow(X_train[3], cmap=plt. get_cmap( gray ))#showtheplotplt. show()Listing 19. 1: Load the MNIST Dataset in Keras. You can see that downloading and loading the MNIST dataset is as easy as calling themnist. loaddata()function. Running the above example, you should see the image below. 2http://rodrigob. github. io/are_we_there_yet/build/classification_datasets_results. html
19. 3. Baseline Model with Multilayer Perceptrons123 Figure 19. 1: Examples from the MNIST dataset19. 3 Baseline Model with Multilayer Perceptrons Do we really need a complex model like a convolutional neural network to get the best resultswith MNIST? You can get good results using a very simple neural network model with a singlehidden layer. In this section we will create a simple Multilayer Perceptron model that achieves anerror rate of 1. 74%. We will use this as a baseline for comparison to more complex convolutionalneural network models. Let's start o↵by importing the classes and functions we will need. importnumpyfromkeras. datasetsimportmnistfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. utilsimportnp_utils Listing 19. 2: Import Classes and Functions. It is always a good idea to initialize the random number generator to a constant to ensurethat the results of your script are reproducible. #fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)Listing 19. 3: Initialize The Random Number Generator. Now we can load the MNIST dataset using the Keras helper function. #loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()Listing 19. 4: Load the MNIST Dataset.
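Before moving on, it can help to confirm what mnist.load_data() returned. The two print statements below are a small optional check rather than part of the book's listing; they assume the X_train, y_train, X_test and y_test arrays loaded above.

# display the dimensions of the loaded arrays
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)

You should see 60,000 training images and 10,000 test images, each stored as a 28×28 array of pixel values, which is exactly the 3-dimensional structure described next.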
The training dataset is structured as a 3-dimensional array of instance, image width and image height. For a Multilayer Perceptron model we must reduce the images down into a vector of pixels. In this case the 28×28 sized images will be 784 pixel input vectors. We can do this transform easily using the reshape() function on the NumPy array. The pixel values are integers, so we cast them to floating point values so that we can normalize them easily in the next step.

# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')

Listing 19.5: Prepare MNIST Dataset For Modeling.

The pixel values are gray scale between 0 and 255. It is almost always a good idea to perform some scaling of input values when using neural network models. Because the scale is well known and well behaved, we can very quickly normalize the pixel values to the range 0 to 1 by dividing each value by the maximum of 255.

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255

Listing 19.6: Normalize Pixel Values.

Finally, the output variable is an integer from 0 to 9. This is a multiclass classification problem. As such, it is good practice to use a one hot encoding of the class values, transforming the vector of class integers into a binary matrix. We can easily do this using the built-in np_utils.to_categorical() helper function in Keras.

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

Listing 19.7: One Hot Encode The Output Variable.

We are now ready to create our simple neural network model. We will define our model in a function. This is handy if you want to extend the example later and try to get a better score.

# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu'))
    model.add(Dense(num_classes, init='normal', activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

Listing 19.8: Define and Compile the Baseline Model.

The model is a simple neural network with one hidden layer with the same number of neurons as there are inputs (784). A rectifier activation function is used for the neurons in the hidden layer. A softmax activation function is used on the output layer to turn the outputs into probability-like values and allow one class of the 10 to be selected as the model's output
19. 3. Baseline Model with Multilayer Perceptrons125prediction. Logarithmic loss is used as the loss function (calledcategoricalcrossentropyin Keras) and the ecient ADAM gradient descent algorithm is used to learn the weights. Asummary of the network structure is provided below: Figure 19. 2: Summary of Multilayer Perceptron Network Structure. We can now fit and evaluate the model. The model is fit over 10 epochs with updates every200 images. The test data is used as the validation dataset, allowing you to see the skill of themodel as it trains. A verbose value of 2 is used to reduce the output to one line for each trainingepoch. Finally, the test dataset is used to evaluate the model and a classification error rate isprinted. #buildthemodelmodel = baseline_model()#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200,verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Baseline Error:%. 2f%%"% (100-scores[1]*100))Listing 19. 9: Evaluate the Baseline Model. The full code listing is provided below for completeness. #Baseline MLPfor MNISTdatasetimportnumpyfromkeras. datasetsimportmnistfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. utilsimportnp_utils#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()
19. 3. Baseline Model with Multilayer Perceptrons126#flatten28*28imagestoa784vectorforeachimagenum_pixels = X_train. shape[1] * X_train. shape[2]X_train = X_train. reshape(X_train. shape[0], num_pixels). astype( float32 )X_test = X_test. reshape(X_test. shape[0], num_pixels). astype( float32 )#normalizeinputsfrom0-255to0-1X_train = X_train / 255X_test = X_test / 255#onehotencodeoutputsy_train = np_utils. to_categorical(y_train)y_test = np_utils. to_categorical(y_test)num_classes = y_test. shape[1]#definebaselinemodeldefbaseline_model():#createmodelmodel = Sequential()model. add(Dense(num_pixels, input_dim=num_pixels, init= normal, activation= relu ))model. add(Dense(num_classes, init= normal, activation= softmax ))#Compilemodelmodel. compile(loss= categorical_crossentropy, optimizer= adam, metrics=[ accuracy ])returnmodel#buildthemodelmodel = baseline_model()#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200,verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Baseline Error:%. 2f%%"% (100-scores[1]*100))Listing 19. 10: Multilayer Perceptron Model for MNIST Problem. Running the example might take a few minutes when run on a CPU. You should see theoutput below. This simple network defined in very few lines of code achieves a respectable errorrate of 1. 73%. Train on 60000 samples, validate on 10000 samples Epoch 1/107s-loss: 0. 2791-acc: 0. 9203-val_loss: 0. 1420-val_acc: 0. 9579Epoch 2/107s-loss: 0. 1122-acc: 0. 9679-val_loss: 0. 0992-val_acc: 0. 9699Epoch 3/108s-loss: 0. 0724-acc: 0. 9790-val_loss: 0. 0784-val_acc: 0. 9745Epoch 4/108s-loss: 0. 0510-acc: 0. 9853-val_loss: 0. 0777-val_acc: 0. 9774Epoch 5/108s-loss: 0. 0367-acc: 0. 9898-val_loss: 0. 0628-val_acc: 0. 9792Epoch 6/108s-loss: 0. 0264-acc: 0. 9931-val_loss: 0. 0641-val_acc: 0. 9797Epoch 7/108s-loss: 0. 0185-acc: 0. 9958-val_loss: 0. 0604-val_acc: 0. 9810Epoch 8/1010s-loss: 0. 0148-acc: 0. 9967-val_loss: 0. 0621-val_acc: 0. 9811Epoch 9/108s-loss: 0. 0109-acc: 0. 9978-val_loss: 0. 0607-val_acc: 0. 9817Epoch 10/108s-loss: 0. 0073-acc: 0. 9987-val_loss: 0. 0595-val_acc: 0. 9827
19. 4. Simple Convolutional Neural Network for MNIST127Baseline Error: 1. 73%Listing 19. 11: Sample Output From Evaluating the Baseline Model. 19. 4 Simple Convolutional Neural Network for MNISTNow that we have seen how to load the MNIST dataset and train a simple Multilayer Perceptronmodel on it, it is time to develop a more sophisticated convolutional neural network or CNNmodel. Keras does provide a lot of capability for creating convolutional neural networks. In thissection we will create a simple CNN for MNIST that demonstrates how to use all of the aspectsof a modern CNN implementation, including Convolutional layers, Pooling layers and Dropoutlayers. The first step is to import the classes and functions needed. importnumpyfromkeras. datasetsimportmnistfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport Flattenfromkeras. layers. convolutionalimport Convolution2Dfromkeras. layers. convolutionalimport Max Pooling2Dfromkeras. utilsimportnp_utils Listing 19. 12: Import classes and functions. Again, we always initialize the random number generator to a constant seed value forreproducibility of results. #fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)Listing 19. 13: Seed Random Number Generator. Next we need to load the MNIST dataset and reshape it so that it is suitable for use traininga CNN. In Keras, the layers used for two-dimensional convolutions expect pixel values with thedimensions[channels][width][height]. In the case of RGB, the first dimension channelswould be 3 for the red, green and blue components and it would be like having 3 image inputsfor every color image. In the case of MNIST where the channels values are gray scale, the pixeldimension is set to 1. #loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][channels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28). astype( float32 )X_test = X_test. reshape(X_test. shape[0], 1, 28, 28). astype( float32 )Listing 19. 14: Load Dataset and Separate Into Train and Test Sets. As before, it is a good idea to normalize the pixel values to the range 0 and 1 and one hotencode the output variable. #normalizeinputsfrom0-255to0-1X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

Listing 19.15: Normalize and One Hot Encode Data.

Next we define our neural network model. Convolutional neural networks are more complex than standard Multilayer Perceptrons, so we will start with a simple structure that nevertheless uses all of the elements needed for state-of-the-art results. The network architecture is summarized below.

1. The first hidden layer is a convolutional layer called a Convolution2D. The layer has 32 feature maps with a size of 5×5 and a rectifier activation function. This is the input layer, expecting images with the structure outlined above.

2. Next we define a pooling layer that takes the maximum value called MaxPooling2D. It is configured with a pool size of 2×2.

3. The next layer is a regularization layer using dropout called Dropout. It is configured to randomly exclude 20% of neurons in the layer in order to reduce overfitting.

4. Next is a layer that converts the 2D matrix data to a vector called Flatten. It allows the output to be processed by standard fully connected layers.

5. Next a fully connected layer with 128 neurons and a rectifier activation function is used.

6. Finally, the output layer has 10 neurons for the 10 classes and a softmax activation function to output probability-like predictions for each class.

As before, the model is trained using logarithmic loss and the ADAM gradient descent algorithm. A depiction of the network structure is provided below.
19. 4. Simple Convolutional Neural Network for MNIST129 Figure 19. 3: Summary of Convolutional Neural Network Structure. defbaseline_model():#createmodelmodel = Sequential()model. add(Convolution2D(32, 5, 5, border_mode= valid, input_shape=(1, 28, 28),activation= relu ))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Dropout(0. 2))model. add(Flatten())model. add(Dense(128, activation= relu ))model. add(Dense(num_classes, activation= softmax ))#Compilemodelmodel. compile(loss= categorical_crossentropy, optimizer= adam, metrics=[ accuracy ])returnmodel Listing 19. 16: Define and Compile CNN Model. We evaluate the model the same way as before with the Multilayer Perceptron. The CNN isfit over 10 epochs with a batch size of 200. #buildthemodelmodel = baseline_model()#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200,verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)
19. 4. Simple Convolutional Neural Network for MNIST130print("CNNError:%. 2f%%"% (100-scores[1]*100))Listing 19. 17: Fit and Evaluate The CNN Model. The full code listing is provided below for completeness. #Simple CNNforthe MNISTDatasetimportnumpyfromkeras. datasetsimportmnistfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport Flattenfromkeras. layers. convolutionalimport Convolution2Dfromkeras. layers. convolutionalimport Max Pooling2Dfromkeras. utilsimportnp_utilsfromkerasimportbackend as KK. set_image_dim_ordering( th )#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][channels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28). astype( float32 )X_test = X_test. reshape(X_test. shape[0], 1, 28, 28). astype( float32 )#normalizeinputsfrom0-255to0-1X_train = X_train / 255X_test = X_test / 255#onehotencodeoutputsy_train = np_utils. to_categorical(y_train)y_test = np_utils. to_categorical(y_test)num_classes = y_test. shape[1]#defineasimple CNNmodeldefbaseline_model():#createmodelmodel = Sequential()model. add(Convolution2D(32, 5, 5, input_shape=(1, 28, 28), activation= relu ))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Dropout(0. 2))model. add(Flatten())model. add(Dense(128, activation= relu ))model. add(Dense(num_classes, activation= softmax ))#Compilemodelmodel. compile(loss= categorical_crossentropy, optimizer= adam, metrics=[ accuracy ])returnmodel#buildthemodelmodel = baseline_model()#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200,verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("CNNError:%. 2f%%"% (100-scores[1]*100))Listing 19. 18: CNN Model for MNIST Problem.
Running the example, the accuracy on the training and validation datasets is printed each epoch, and at the end the classification error rate is printed. Epochs may take 60 seconds to run on the CPU, or about 10 minutes in total depending on your hardware. You can see that the network achieves an error rate of 1.00%, which is better than our simple Multilayer Perceptron model above.

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
52s - loss: 0.2412 - acc: 0.9318 - val_loss: 0.0756 - val_acc: 0.9763
Epoch 2/10
52s - loss: 0.0728 - acc: 0.9781 - val_loss: 0.0527 - val_acc: 0.9825
Epoch 3/10
53s - loss: 0.0497 - acc: 0.9852 - val_loss: 0.0393 - val_acc: 0.9852
Epoch 4/10
53s - loss: 0.0414 - acc: 0.9868 - val_loss: 0.0436 - val_acc: 0.9852
Epoch 5/10
53s - loss: 0.0324 - acc: 0.9898 - val_loss: 0.0376 - val_acc: 0.9871
Epoch 6/10
53s - loss: 0.0281 - acc: 0.9910 - val_loss: 0.0421 - val_acc: 0.9866
Epoch 7/10
52s - loss: 0.0227 - acc: 0.9928 - val_loss: 0.0319 - val_acc: 0.9895
Epoch 8/10
52s - loss: 0.0198 - acc: 0.9938 - val_loss: 0.0357 - val_acc: 0.9885
Epoch 9/10
52s - loss: 0.0158 - acc: 0.9951 - val_loss: 0.0333 - val_acc: 0.9887
Epoch 10/10
52s - loss: 0.0144 - acc: 0.9953 - val_loss: 0.0322 - val_acc: 0.9900
CNN Error: 1.00%

Listing 19.19: Sample Output From Evaluating the CNN Model.

19.5 Larger Convolutional Neural Network for MNIST

Now that we have seen how to create a simple CNN, let's take a look at a model capable of close to state-of-the-art results. We import the classes and functions then load and prepare the data the same as in the previous CNN example. This time we define a larger CNN architecture with additional convolutional, max pooling layers and fully connected layers. The network topology can be summarized as follows.

1. Convolutional layer with 30 feature maps of size 5×5.

2. Pooling layer taking the max over 2×2 patches.

3. Convolutional layer with 15 feature maps of size 3×3.

4. Pooling layer taking the max over 2×2 patches.

5. Dropout layer with a probability of 20%.

6. Flatten layer.

7. Fully connected layer with 128 neurons and rectifier activation.
8. Fully connected layer with 50 neurons and rectifier activation.

9. Output layer.

A depiction of this larger network structure is provided below.

Figure 19.4: Summary of the Larger Convolutional Neural Network Structure.

Like the previous two experiments, the model is fit over 10 epochs with a batch size of 200.

# Larger CNN for the MNIST Dataset
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][channels][width][height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# define the larger model
def larger_model():
    # create model
    model = Sequential()
    model.add(Convolution2D(30, 5, 5, input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Convolution2D(15, 3, 3, activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = larger_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100-scores[1]*100))

Listing 19.20: Larger CNN for the MNIST Problem.

Running the example prints accuracy on the training and validation datasets each epoch and a final classification error rate. The model takes about 60 seconds to run per epoch on a modern CPU. This slightly larger model achieves the respectable classification error rate of 0.83%.

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
59s - loss: 0.3791 - acc: 0.8793 - val_loss: 0.0812 - val_acc: 0.9742
Epoch 2/10
59s - loss: 0.0929 - acc: 0.9708 - val_loss: 0.0467 - val_acc: 0.9846
Epoch 3/10
59s - loss: 0.0679 - acc: 0.9787 - val_loss: 0.0374 - val_acc: 0.9878
Epoch 4/10
59s - loss: 0.0539 - acc: 0.9828 - val_loss: 0.0321 - val_acc: 0.9893
Epoch 5/10
19. 6. Summary13459s-loss: 0. 0462-acc: 0. 9858-val_loss: 0. 0282-val_acc: 0. 9897Epoch 6/1060s-loss: 0. 0396-acc: 0. 9874-val_loss: 0. 0278-val_acc: 0. 9902Epoch 7/1060s-loss: 0. 0365-acc: 0. 9884-val_loss: 0. 0229-val_acc: 0. 9921Epoch 8/1059s-loss: 0. 0328-acc: 0. 9895-val_loss: 0. 0287-val_acc: 0. 9901Epoch 9/1060s-loss: 0. 0306-acc: 0. 9903-val_loss: 0. 0224-val_acc: 0. 9923Epoch 10/1059s-loss: 0. 0268-acc: 0. 9915-val_loss: 0. 0240-val_acc: 0. 9917Large CNN Error: 0. 83%Listing 19. 21: Sample Output From Evaluating the Larger CNN Model. This is not an optimized network topology. Nor is this a reproduction of a network topologyfrom a recent paper. There is a lot of opportunity for you to tune and improve upon this model. What is the best classification error rate you can achieve?19. 6 Summary In this lesson you discovered the MNIST handwritten digit recognition problem and deeplearning models developed in Python using the Keras library that are capable of achievingexcellent results. Working through this tutorial you learned: How to load the MNIST dataset in Keras and generate plots of the dataset. How to reshape the MNIST dataset and develop a simple but well performing Multilayer Perceptron model for the problem. How to use Keras to create convolutional neural network models for MNIST. How to develop and evaluate larger CNN models for MNIST capable of near world classresults. 19. 6. 1 Next You now know how to develop and improve convolutional neural network models in Keras. Apowerful technique for improving the performance of CNN models is to use data augmentation. In the next section you will discover the data augmentation API in Keras and how the di↵erentimage augmentation techniques a↵ect the MNIST images.
Chapter 20Improve Model Performance With Image Augmentation Data preparation is required when working with neural network and deep learning models. Increasingly data augmentation is also required on more complex object recognition tasks. Inthis lesson you will discover how to use data preparation and data augmentation with yourimage datasets when developing and evaluating deep learning models in Python with Keras. After completing this lesson, you will know: About the image augmentation API provide by Keras and how to use it with your models. How to perform feature standardization. How to perform ZCA whitening of your images. How to augment data with random rotations, shifts and flips of images. How to save augmented image data to disk. Let's get started. 20. 1 Keras Image Augmentation APILike the rest of Keras, the image augmentation API is simple and powerful. Keras providesthe Image Data Generatorclass that defines the configuration for image data preparation andaugmentation. This includes capabilities such as: Feature-wise standardization. ZCA whitening. Random rotation, shifts, shear and flips. Dimension reordering. Save augmented images to disk. 135
20. 2. Point of Comparison for Image Augmentation136An augmented image generator can be created as follows:datagen = Image Data Generator()Listing 20. 1: Create a Image Data Generator. Rather than performing the operations on your entire image dataset in memory, the API isdesigned to be iterated by the deep learning model fitting process, creating augmented imagedata for you just-in-time. This reduces your memory overhead, but adds some additional timecost during model training. After you have created and configured your Image Data Generator,you must fit it on your data. This will calculate any statistics required to actually performthe transforms to your image data. You can do this by calling thefit()function on the datagenerator and pass it your training dataset. datagen. fit(train)Listing 20. 2: Fit the Image Data Generator. The data generator itself is in fact an iterator, returning batches of image samples whenrequested. We can configure the batch size and prepare the data generator and get batches ofimages by calling theflow()function. X_batch, y_batch = datagen. flow(train, train, batch_size=32)Listing 20. 3: Configure the Batch Size for the Image Data Generator. Finally we can make use of the data generator. Instead of calling thefit()function onour model, we must call thefitgenerator()function and pass in the data generator and thedesired length of an epoch as well as the total number of epochs on which to train. fit_generator(datagen, samples_per_epoch=len(train), nb_epoch=100)Listing 20. 4: Fit a Model Using the Image Data Generator. You can learn more about the Keras image data generator API in the Keras documentation1. 20. 2 Point of Comparison for Image Augmentation Now that you know how the image augmentation API in Keras works, let's look at someexamples. We will use the MNIST handwritten digit recognition task in these examples (learnmore in Section19. 1). To begin, let's take a look at the first 9 images in the training dataset. #Plotofimagesasbaselineforcomparisonfromkeras. datasetsimportmnistfrommatplotlibimportpyplot#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_train[i], cmap=pyplot. get_cmap( gray ))#showtheplot1http://keras. io/preprocessing/image/
20. 3. Feature Standardization137pyplot. show()Listing 20. 5: Load and Plot the MNIST dataset. Running this example provides the following image that we can use as a point of comparisonwith the image preparation and augmentation tasks in the examples below. Figure 20. 1: Samples from the MNIST dataset. 20. 3 Feature Standardization It is also possible to standardize pixel values across the entire dataset. This is called feature stan-dardization and mirrors the type of standardization often performed for each column in a tabulardataset. This is di↵erent to sample standardization described in the previous section as pixel val-ues are standardized across all samples (all images in the dataset). In this case each image is con-sidered a feature. You can perform feature standardization by setting thefeaturewisecenterandfeaturewisestdnormalizationarguments on the Image Data Generatorclass. #Standardizeimagesacrossthedataset,mean=0,stdev=1fromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplot#loaddata
20. 3. Feature Standardization138(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationdatagen = Image Data Generator(featurewise_center=True, featurewise_std_normalization=True)#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesfor X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 6: Example of Feature Standardization. Running this example you can see that the e↵ect on the actual images, seemingly darkeningand lightening di↵erent digits.
20. 4. ZCA Whitening139 Figure 20. 2: Standardized Feature MNIST Images. 20. 4 ZCA Whitening A whitening transform of an image is a linear algebra operation that reduces the redundancyin the matrix of pixel images. Less redundancy in the image is intended to better highlightthe structures and features in the image to the learning algorithm. Typically, image whiteningis performed using the Principal Component Analysis (PCA) technique. More recently, analternative called ZCA (learn more in Appendix A of this tech report2)s h o w sb e t t e rr e s u l t sand results in transformed images that keeps all of the original dimensions and unlike PCA,resulting transformed images still look like their originals. You can perform a ZCA whiteningtransform by setting thezcawhiteningargument to True. #ZCAwhiteningfromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplot#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)2http://www. cs. toronto. edu/~kriz/learning-features-2009-TR. pdf
20. 4. ZCA Whitening140X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationdatagen = Image Data Generator(zca_whitening=True)#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesfor X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 7: Example of ZCA Whitening. Running the example, you can see the same general structure in the images and how theoutline of each digit has been highlighted. Figure 20. 3: ZCA Whitening of MNIST Images.
20. 5. Random Rotations14120. 5 Random Rotations Sometimes images in your sample data may have varying and di↵erent rotations in the scene. You can train your model to better handle rotations of images by artificially and randomlyrotating images from your dataset during training. The example below creates random rotationsof the MNIST digits up to 90 degrees by setting therotationrangeargument. #Random Rotationsfromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplot#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationdatagen = Image Data Generator(rotation_range=90)#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesfor X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 8: Example of Random Image Rotations. Running the example, you can see that images have been rotated left and right up to a limitof 90 degrees. This is not helpful on this problem because the MNIST digits have a normalizedorientation, but this transform might be of help when learning from photographs where theobjects may have di↵erent orientations.
20. 6. Random Shifts142 Figure 20. 4: Random Rotations of MNIST Images. 20. 6 Random Shifts Objects in your images may not be centered in the frame. They may be o↵-center in a varietyof di↵erent ways. You can train your deep learning network to expect and currently handleo↵-center objects by artificially creating shifted versions of your training data. Keras supportsseparate horizontal and vertical random shifting of training data by thewidthshiftrangeandheightshiftrangearguments. #Random Shiftsfromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplot#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationshift = 0. 2
20. 6. Random Shifts143datagen = Image Data Generator(width_shift_range=shift, height_shift_range=shift)#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesfor X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 9: Example of Random Image Shifts. Running this example creates shifted versions of the digits. Again, this is not required for MNIST as the handwritten digits are already centered, but you can see how this might be usefulon more complex problem domains. Figure 20. 5: Random Shifted MNIST Images.
20. 7. Random Flips14420. 7 Random Flips Another augmentation to your image data that can improve performance on large and complexproblems is to create random flips of images in your training data. Keras supports random flippingalong both the vertical and horizontal axes using theverticalflipandhorizontalfliparguments. #Random Flipsfromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplot#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationdatagen = Image Data Generator(horizontal_flip=True, vertical_flip=True)#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesfor X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 10: Example of Random Image Flips. Running this example you can see flipped digits. Flipping digits in MNIST is not useful asthey will always have the correct left and right orientation, but this may be useful for problemswith photographs of objects in a scene that can have a varied orientation.
20. 8. Saving Augmented Images to File145 Figure 20. 6: Randomly Flipped MNIST Images. 20. 8 Saving Augmented Images to File The data preparation and augmentation is performed just-in-time by Keras. This is ecient interms of memory, but you may require the exact images used during training. For example,perhaps you would like to use them with a di↵erent software package later or only generatethem once and use them on multiple di↵erent deep learning models or configurations. Keras allows you to save the images generated during training. The directory, filenameprefix and image file type can be specified to theflow()function before training. Then, duringtraining, the generated images will be written to file. The example below demonstrates this andwrites 9 images to aimagessubdirectory with the prefixaugand the file type of PNG. #Saveaugmentedimagestofilefromkeras. datasetsimportmnistfromkeras. preprocessing. imageimport Image Data Generatorfrommatplotlibimportpyplotimportosfromkerasimportbackend as KK. set_image_dim_ordering( th )#loaddata(X_train, y_train), (X_test, y_test) = mnist. load_data()#reshapetobe[samples][pixels][width][height]
20. 8. Saving Augmented Images to File146X_train = X_train. reshape(X_train. shape[0], 1, 28, 28)X_test = X_test. reshape(X_test. shape[0], 1, 28, 28)#convertfrominttofloat X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )#definedatapreparationdatagen = Image Data Generator()#fitparametersfromdatadatagen. fit(X_train)#configurebatchsizeandretrieveonebatchofimagesos. makedirs( images )for X_batch, y_batchindatagen. flow(X_train, y_train, batch_size=9, save_to_dir= images,save_prefix= aug, save_format= png ):#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(X_batch[i]. reshape(28, 28), cmap=pyplot. get_cmap( gray ))#showtheplotpyplot. show()break Listing 20. 11: Save Augmented Images To File. Running the example you can see that images are only written when they are generated. Figure 20. 7: Randomly Flipped MNIST Images.
20.9 Tips For Augmenting Image Data with Keras

Image data is unique in that you can review the transformed copies of the data and quickly get an idea of how your model may perceive it. Below are some tips for getting the most from image data preparation and augmentation for deep learning.

Review Dataset. Take some time to review your dataset in great detail. Look at the images. Take note of image preparation and augmentations that might benefit the training process of your model, such as the need to handle different shifts, rotations or flips of objects in the scene.

Review Augmentations. Review sample images after the augmentation has been performed. It is one thing to intellectually know what image transforms you are using, it is a very different thing to look at examples. Review images both with the individual augmentations you are using as well as the full set of augmentations you plan to use in aggregate. You may see ways to simplify or further enhance your model training process.

Evaluate a Suite of Transforms. Try more than one image data preparation and augmentation scheme. Often you can be surprised by the results of a data preparation scheme you did not think would be beneficial.

20.10 Summary

In this lesson you discovered image data preparation and augmentation. You discovered a range of techniques that you can use easily in Python with Keras for deep learning models. You learned about:

The ImageDataGenerator API in Keras for generating transformed images just-in-time.

Feature-wise pixel standardization.

The ZCA whitening transform.

Random rotations, shifts and flips of images.

How to save transformed images to file for later reuse.

20.10.1 Next

You now know how to develop convolutional neural networks and use the image augmentation API in Keras. In the next chapter you will work through developing larger and deeper models for a more complex object recognition task using Keras.
Chapter 21

Project: Object Recognition in Photographs

A difficult problem where traditional neural networks fall down is called object recognition. It is where a model is able to identify objects in images. In this lesson you will discover how to develop and evaluate deep learning models for object recognition in Keras. After completing this step-by-step tutorial, you will know:

About the CIFAR-10 object recognition dataset and how to load and use it in Keras.

How to create a simple Convolutional Neural Network for object recognition.

How to lift performance by creating deeper Convolutional Neural Networks.

Let's get started.

Note: You may want to speed up the computation for this tutorial by using GPU rather than CPU hardware, such as the process described in Chapter 5. This is a suggestion, not a requirement. The tutorial will work just fine on the CPU.

21.1 Photograph Object Recognition Dataset

The problem of automatically identifying objects in photographs is difficult because of the near infinite number of permutations of objects, positions, lighting and so on. It's a really hard problem. This is a well studied problem in computer vision and more recently an important demonstration of the capability of deep learning. A standard computer vision and deep learning dataset for this problem was developed by the Canadian Institute for Advanced Research (CIFAR).

The CIFAR-10 dataset consists of 60,000 photos divided into 10 classes (hence the name CIFAR-10)1. Classes include common objects such as airplanes, automobiles, birds, cats and so on. The dataset is split in a standard way, where 50,000 images are used for training a model and the remaining 10,000 for evaluating its performance. The photos are in color with red, green and blue channels, but are small, measuring 32×32 pixel squares.

1 http://www.cs.toronto.edu/~kriz/cifar.html
21. 2. Loading The CIFAR-10 Dataset in Keras149State-of-the-art results can be achieved using very large convolutional neural networks. Youcan learn about state-of-the-art results on CIFAR-10 on Rodrigo Benenson's webpage2. Modelperformance is reported in classification accuracy, with very good performance above 90% withhuman performance on the problem at 94% and state-of-the-art results at 96% at the time ofwriting. 21. 2 Loading The CIFAR-10 Dataset in Keras The CIFAR-10 dataset can easily be loaded in Keras. Keras has the facility to automaticallydownload standard datasets like CIFAR-10 and store them in the~/. keras/datasetsdirectoryusing thecifar10. loaddata()function. This dataset is large at 163 megabytes, so it maytake a few minutes to download. Once downloaded, subsequent calls to the function will loadthe dataset ready for use. The dataset is stored as Python pickled training and test sets, ready for use in Keras. Eachimage is represented as a three dimensional matrix, with dimensions for red, green, blue, widthand height. We can plot images directly using the Matplotlib Python plotting library. #Plotadhoc CIFAR10instancesfromkeras. datasetsimportcifar10frommatplotlibimportpyplotfromscipy. miscimporttoimage#loaddata(X_train, y_train), (X_test, y_test) = cifar10. load_data()#createagridof3x3imagesforiinrange(0, 9):pyplot. subplot(330 + 1 + i)pyplot. imshow(toimage(X_train[i]))#showtheplotpyplot. show()Listing 21. 1: Load And Plot Sample CIFAR-10 Images. Running the code create a 3⇥3 plot of photographs. The images have been scaled up fromtheir small 32⇥32 size, but you can clearly see trucks horses and cars. You can also see somedistortion in the images that have been forced to the square aspect ratio. 2http://rodrigob. github. io/are_we_there_yet/build/classification_datasets_results. html
21. 3. Simple CNN for CIFAR-10150 Figure 21. 1: Small Sample of CIFAR-10 Images. 21. 3 Simple CNN for CIFAR-10The CIFAR-10 problem is best solved using a convolutional neural network (CNN). We canquickly start o↵by importing all of the classes and functions we will need in this example. #Simple CNNmodelfor CIFAR-10importnumpyfromkeras. datasetsimportcifar10fromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport Flattenfromkeras. constraintsimportmaxnormfromkeras. optimizersimport SGDfromkeras. layers. convolutionalimport Convolution2Dfromkeras. layers. convolutionalimport Max Pooling2Dfromkeras. utilsimportnp_utilsfromkerasimportbackend as KK. set_image_dim_ordering( th )Listing 21. 2: Load Classes and Functions. As is good practice, we next initialize the random number seed with a constant to ensurethe results are reproducible. #fixrandomseedforreproducibilityseed = 7
21. 3. Simple CNN for CIFAR-10151numpy. random. seed(seed)Listing 21. 3: Initialize Random Number Generator. Next we can load the CIFAR-10 dataset. #loaddata(X_train, y_train), (X_test, y_test) = cifar10. load_data()Listing 21. 4: Load the CIFAR-10 Dataset. The pixel values are in the range of 0 to 255 for each of the red, green and blue channels. Itis good practice to work with normalized data. Because the input values are well understood,we can easily normalize to the range 0 to 1 by dividing each value by the maximum observationwhich is 255. Note, the data is loaded as integers, so we must cast it to floating point values inorder to perform the division. #normalizeinputsfrom0-255to0. 0-1. 0X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )X_train = X_train / 255. 0X_test = X_test / 255. 0Listing 21. 5: Normalize the CIFAR-10 Dataset. The output variables are defined as a vector of integers from 0 to 1 for each class. We canuse a one hot encoding to transform them into a binary matrix in order to best model theclassification problem. We know there are 10 classes for this problem, so we can expect thebinary matrix to have a width of 10. #onehotencodeoutputsy_train = np_utils. to_categorical(y_train)y_test = np_utils. to_categorical(y_test)num_classes = y_test. shape[1]Listing 21. 6: One Hot Encode The Output Variable. Let's start o↵by defining a simple CNN structure as a baseline and evaluate how well itperforms on the problem. We will use a structure with two convolutional layers followed bymax pooling and a flattening out of the network to fully connected layers to make predictions. Our baseline network structure can be summarized as follows:1. Convolutional input layer, 32 feature maps with a size of 3⇥3, a rectifier activationfunction and a weight constraint of max norm set to 3. 2. Dropout set to 20%. 3. Convolutional layer, 32 feature maps with a size of 3⇥3, a rectifier activation functionand a weight constraint of max norm set to 3. 4. Max Pool layer with the size 2⇥2. 5. Flatten layer. 6. Fully connected layer with 512 units and a rectifier activation function.
21. 3. Simple CNN for CIFAR-101527. Dropout set to 50%. 8. Fully connected output layer with 10 units and a softmax activation function. A logarithmic loss function is used with the stochastic gradient descent optimization algorithmconfigured with a large momentum and weight decay, starting with a learning rate of 0. 01. Avisualization of the network structure is provided below. Figure 21. 2: Summary of the Convolutional Neural Network Structure. #Createthemodelmodel = Sequential()model. add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32), border_mode= same,activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 2))model. add(Convolution2D(32, 3, 3, activation= relu, border_mode= same,W_constraint=maxnorm(3)))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Flatten())model. add(Dense(512, activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 5))model. add(Dense(num_classes, activation= softmax ))#Compilemodelepochs = 25
21. 3. Simple CNN for CIFAR-10153lrate = 0. 01decay = lrate/epochssgd = SGD(lr=lrate, momentum=0. 9, decay=decay, nesterov=False)model. compile(loss= categorical_crossentropy, optimizer=sgd, metrics=[ accuracy ])print(model. summary())Listing 21. 7: Define and Compile the CNN Model. We fit this model with 25 epochs and a batch size of 32. A small number of epochs waschosen to help keep this tutorial moving. Normally the number of epochs would be one or twoorders of magnitude larger for this problem. Once the model is fit, we evaluate it on the testdataset and print out the classification accuracy. #Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=epochs,batch_size=32, verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 21. 8: Evaluate the Accuracy of the CNN Model. The full code listing is provided below for completeness. #Simple CNNmodelforthe CIFAR-10Datasetimportnumpyfromkeras. datasetsimportcifar10fromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport Flattenfromkeras. constraintsimportmaxnormfromkeras. optimizersimport SGDfromkeras. layers. convolutionalimport Convolution2Dfromkeras. layers. convolutionalimport Max Pooling2Dfromkeras. utilsimportnp_utilsfromkerasimportbackend as KK. set_image_dim_ordering( th )#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddata(X_train, y_train), (X_test, y_test) = cifar10. load_data()#normalizeinputsfrom0-255to0. 0-1. 0X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )X_train = X_train / 255. 0X_test = X_test / 255. 0#onehotencodeoutputsy_train = np_utils. to_categorical(y_train)y_test = np_utils. to_categorical(y_test)num_classes = y_test. shape[1]#Createthemodelmodel = Sequential()model. add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32), border_mode= same,activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 2))
21. 4. Larger CNN for CIFAR-10154model. add(Convolution2D(32, 3, 3, activation= relu, border_mode= same,W_constraint=maxnorm(3)))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Flatten())model. add(Dense(512, activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 5))model. add(Dense(num_classes, activation= softmax ))#Compilemodelepochs = 25lrate = 0. 01decay = lrate/epochssgd = SGD(lr=lrate, momentum=0. 9, decay=decay, nesterov=False)model. compile(loss= categorical_crossentropy, optimizer=sgd, metrics=[ accuracy ])print(model. summary())#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=epochs,batch_size=32, verbose=2)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 21. 9: CNN Model for CIFAR-10 Problem. The classification accuracy and loss is printed each epoch on both the training and testdatasets. The model is evaluated on the test set and achieves an accuracy of 69. 76%, which isgood but not excellent. Epoch 21/2524s-loss: 0. 3583-acc: 0. 8721-val_loss: 1. 0083-val_acc: 0. 6936Epoch 22/2524s-loss: 0. 3360-acc: 0. 8818-val_loss: 0. 9994-val_acc: 0. 6991Epoch 23/2524s-loss: 0. 3215-acc: 0. 8854-val_loss: 1. 0237-val_acc: 0. 6994Epoch 24/2524s-loss: 0. 3040-acc: 0. 8930-val_loss: 1. 0289-val_acc: 0. 7016Epoch 25/2524s-loss: 0. 2879-acc: 0. 8990-val_loss: 1. 0600-val_acc: 0. 6976Accuracy: 69. 76%Listing 21. 10: Sample Output for the CNN Model. We can improve the accuracy significantly by creating a much deeper network. This is whatwe will look at in the next section. 21. 4 Larger CNN for CIFAR-10We have seen that a simple CNN performs poorly on this complex problem. In this sectionwe look at scaling up the size and complexity of our model. Let's design a deep version of thesimple CNN above. We can introduce an additional round of convolutions with many morefeature maps. We will use the same pattern of Convolutional, Dropout, Convolutional and Max Pooling layers. This pattern will be repeated 3 times with 32, 64, and 128 feature maps. The e↵ect will bean increasing number of feature maps with a smaller and smaller size given the max pooling
layers. Finally, an additional and larger Dense layer will be used at the output end of the network in an attempt to better translate the large number of feature maps to class values. We can summarize the new network architecture as follows:

1. Convolutional input layer, 32 feature maps with a size of 3×3 and a rectifier activation function.

2. Dropout layer at 20%.

3. Convolutional layer, 32 feature maps with a size of 3×3 and a rectifier activation function.

4. Max Pool layer with size 2×2.

5. Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.

6. Dropout layer at 20%.

7. Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.

8. Max Pool layer with size 2×2.

9. Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.

10. Dropout layer at 20%.

11. Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.

12. Max Pool layer with size 2×2.

13. Flatten layer.

14. Dropout layer at 20%.

15. Fully connected layer with 1,024 units and a rectifier activation function.

16. Dropout layer at 20%.

17. Fully connected layer with 512 units and a rectifier activation function.

18. Dropout layer at 20%.

19. Fully connected output layer with 10 units and a softmax activation function.

This is a larger network and a bit unwieldy to visualize. We can fit and evaluate this model using the same procedure above and the same number of epochs but a larger batch size of 64, found through some minor experimentation.

# Large CNN model for the CIFAR-10 Dataset
import numpy
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
21. 4. Larger CNN for CIFAR-10156fromkeras. optimizersimport SGDfromkeras. layers. convolutionalimport Convolution2Dfromkeras. layers. convolutionalimport Max Pooling2Dfromkeras. utilsimportnp_utilsfromkerasimportbackend as KK. set_image_dim_ordering( th )#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loaddata(X_train, y_train), (X_test, y_test) = cifar10. load_data()#normalizeinputsfrom0-255to0. 0-1. 0X_train = X_train. astype( float32 )X_test = X_test. astype( float32 )X_train = X_train / 255. 0X_test = X_test / 255. 0#onehotencodeoutputsy_train = np_utils. to_categorical(y_train)y_test = np_utils. to_categorical(y_test)num_classes = y_test. shape[1]#Createthemodelmodel = Sequential()model. add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32), activation= relu,border_mode= same ))model. add(Dropout(0. 2))model. add(Convolution2D(32, 3, 3, activation= relu, border_mode= same ))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Convolution2D(64, 3, 3, activation= relu, border_mode= same ))model. add(Dropout(0. 2))model. add(Convolution2D(64, 3, 3, activation= relu, border_mode= same ))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Convolution2D(128, 3, 3, activation= relu, border_mode= same ))model. add(Dropout(0. 2))model. add(Convolution2D(128, 3, 3, activation= relu, border_mode= same ))model. add(Max Pooling2D(pool_size=(2, 2)))model. add(Flatten())model. add(Dropout(0. 2))model. add(Dense(1024, activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 2))model. add(Dense(512, activation= relu, W_constraint=maxnorm(3)))model. add(Dropout(0. 2))model. add(Dense(num_classes, activation= softmax ))#Compilemodelepochs = 25lrate = 0. 01decay = lrate/epochssgd = SGD(lr=lrate, momentum=0. 9, decay=decay, nesterov=False)model. compile(loss= categorical_crossentropy, optimizer=sgd, metrics=[ accuracy ])print(model. summary())#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=epochs,batch_size=64)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))
Listing 21.11: Large CNN Model for CIFAR-10 Problem.

Running this example prints the classification accuracy and loss on the training and test datasets each epoch. The estimate of classification accuracy for the final model is 78.77%, which is nearly 10 points better than our simpler model.

Epoch 21/25
50000/50000 [==============================] - 34s - loss: 0.5154 - acc: 0.8150 - val_loss: 0.6428 - val_acc: 0.7801
Epoch 22/25
50000/50000 [==============================] - 34s - loss: 0.4968 - acc: 0.8228 - val_loss: 0.6338 - val_acc: 0.7854
Epoch 23/25
50000/50000 [==============================] - 34s - loss: 0.4851 - acc: 0.8283 - val_loss: 0.6350 - val_acc: 0.7849
Epoch 24/25
50000/50000 [==============================] - 34s - loss: 0.4663 - acc: 0.8340 - val_loss: 0.6376 - val_acc: 0.7823
Epoch 25/25
50000/50000 [==============================] - 34s - loss: 0.4544 - acc: 0.8389 - val_loss: 0.6289 - val_acc: 0.7877
Accuracy: 78.77%

Listing 21.12: Sample Output for Fitting the Larger CNN Model.

21.5 Extensions To Improve Model Performance

We have achieved good results on this very difficult problem, but we are still a good way from achieving world class results. Below are some ideas that you can try to extend upon the model and improve model performance.

Train for More Epochs. Each model was trained for a very small number of epochs, 25. It is common to train large convolutional neural networks for hundreds or thousands of epochs. I would expect that performance gains can be achieved by significantly raising the number of training epochs.

Image Data Augmentation. The objects in the image vary in their position. Another boost in model performance can likely be achieved by using some data augmentation. Methods such as standardization and random shifts and horizontal image flips may be beneficial (a small sketch of this idea follows this list).

Deeper Network Topology. The larger network presented is deep, but larger networks could be designed for the problem. This may involve more feature maps closer to the input and perhaps less aggressive pooling. Additionally, standard convolutional network topologies that have been shown useful may be adopted and evaluated on the problem.

What accuracy can you achieve on this problem?
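To make the Image Data Augmentation idea above concrete, the listing below shows one way it could be wired into this chapter's model using the ImageDataGenerator API from Chapter 20. This is a hedged sketch, not a listing from the book: it assumes the Keras 1.x API and the model, epochs, X_train, y_train, X_test and y_test objects defined earlier in this chapter, and the shift range, flip setting and batch size are illustrative choices you would tune yourself.

# train the CIFAR-10 model on augmented batches generated just-in-time
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
# compute any statistics required by the configured transforms
datagen.fit(X_train)
# replace the call to fit() with fit_generator() so batches are augmented on the fly
model.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
                    samples_per_epoch=X_train.shape[0], nb_epoch=epochs,
                    validation_data=(X_test, y_test), verbose=2)

Whether this helps depends on the transforms chosen; reviewing a few augmented batches first, as suggested in the previous chapter, is a sensible check.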
21.6 Summary

In this lesson you discovered how to create deep learning models in Keras for object recognition in photographs. After working through this tutorial you learned:

About the CIFAR-10 dataset and how to load it in Keras and plot ad hoc examples from the dataset.
How to train and evaluate a simple convolutional neural network on the problem.
How to expand a simple convolutional neural network into a deep convolutional neural network in order to boost performance on this difficult problem.
How to use data augmentation to get a further boost on the difficult object recognition problem.

21.6.1 Next

The CIFAR-10 dataset does present a difficult challenge and you now know how to develop much larger convolutional neural networks. A clever feature of this type of network is that it can be used to learn the spatial structure in other domains, such as sequences of words. In the next chapter you will work through the application of a one-dimensional convolutional neural network to a natural language processing problem of sentiment classification.
Chapter 22
Project: Predict Sentiment From Movie Reviews

Sentiment analysis is a natural language processing problem where text is understood and the underlying intent is predicted. In this lesson you will discover how you can predict the sentiment of movie reviews as either positive or negative in Python using the Keras deep learning library. After completing this step-by-step tutorial, you will know:

About the IMDB sentiment analysis problem for natural language processing and how to load it in Keras.
How to use word embedding in Keras for natural language problems.
How to develop and evaluate a Multilayer Perceptron model for the IMDB problem.
How to develop a one-dimensional convolutional neural network model for the IMDB problem.

Let's get started.

22.1 Movie Review Sentiment Classification Dataset

The dataset used in this project is the Large Movie Review Dataset, often referred to as the IMDB dataset1. It contains 25,000 highly polar movie reviews (good or bad) for training and the same amount again for testing. The problem is to determine whether a given movie review has a positive or negative sentiment. The data was collected by Stanford researchers and was used in a 2011 paper where a 50-50 split of the data was used for training and test2. An accuracy of 88.89% was achieved.

1http://ai.stanford.edu/~amaas/data/sentiment/
2http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf
22.2 Load the IMDB Dataset With Keras

Keras provides access to the IMDB dataset built-in3. The imdb.load_data() function allows you to load the dataset in a format that is ready for use in neural network and deep learning models. The words have been replaced by integers that indicate the absolute popularity of the word in the dataset. The sentences in each review are therefore comprised of a sequence of integers.

Calling imdb.load_data() the first time will download the IMDB dataset to your computer and store it in your home directory under ~/.keras/datasets/imdb.pkl as a 32 megabyte file. Usefully, the imdb.load_data() function provides additional arguments including the number of top words to load (where words with a lower integer are marked as zero in the returned data), the number of top words to skip (to avoid very common words like the) and the maximum length of reviews to support. Let's load the dataset and calculate some properties of it. We will start off by loading some libraries and loading the entire IMDB dataset as a training dataset.

import numpy
from keras.datasets import imdb
from matplotlib import pyplot
# load the dataset
(X_train, y_train), (X_test, y_test) = imdb.load_data()
X = numpy.concatenate((X_train, X_test), axis=0)
y = numpy.concatenate((y_train, y_test), axis=0)

Listing 22.1: Load the IMDB Dataset.

Next we can display the shape of the training dataset.

# summarize size
print("Training data:")
print(X.shape)
print(y.shape)

Listing 22.2: Display The Shape of the IMDB Dataset.

Running this snippet, we can see that there are 50,000 records.

Training data:
(50000,)
(50000,)

Listing 22.3: Output of the Shape of the IMDB Dataset.

We can also print the unique class values.

# summarize number of classes
print("Classes:")
print(numpy.unique(y))

Listing 22.4: Display The Classes in the IMDB Dataset.

We can see that it is a binary classification problem for good and bad sentiment in the review.

Classes:
[0 1]

3http://keras.io/datasets/
22. 2. Load the IMDB Dataset With Keras161Listing 22. 5: Output of the Classes of the IMDB Dataset. Next we can get an idea of the total number of unique words in the dataset. #Summarizenumberofwordsprint("Numberofwords:")print(len(numpy. unique(numpy. hstack(X))))Listing 22. 6: Display The Number of Unique Words in the IMDB Dataset. Interestingly, we can see that there are just under 100,000 words across the entire dataset. Number of words:88585Listing 22. 7: Output of the Classes of Unique Words in the IMDB Dataset. Finally, we can get an idea of the average review length. #Summarizereviewlengthprint("Reviewlength:")result =map(len, X)print("Mean%. 2fwords(%f)"% (numpy. mean(result), numpy. std(result)))#plotreviewlengthasaboxplotandhistogrampyplot. subplot(121)pyplot. boxplot(result)pyplot. subplot(122)pyplot. hist(result)pyplot. show()Listing 22. 8: Plot the distribution of Review Lengths. We can see that the average review has just under 300 words with a standard deviation ofjust over 200 words. Review length:Mean 234. 76 words (172. 911495)Listing 22. 9: Output of the Distribution Summary for Review Length. The full code listing is provided below for completeness. #Loadand Plotthe IMDBdatasetimportnumpyfromkeras. datasetsimportimdbfrommatplotlibimportpyplot#loadthedataset(X_train, y_train), (X_test, y_test) = imdb. load_data()X = numpy. concatenate((X_train, X_test), axis=0)y = numpy. concatenate((y_train, y_test), axis=0)#summarizesizeprint("Trainingdata:")print(X. shape)print(y. shape)#Summarizenumberofclassesprint("Classes:")print(numpy. unique(y))#Summarizenumberofwords
22. 3. Word Embeddings162print("Numberofwords:")print(len(numpy. unique(numpy. hstack(X))))#Summarizereviewlengthprint("Reviewlength:")result =map(len, X)print("Mean%. 2fwords(%f)"% (numpy. mean(result), numpy. std(result)))#plotreviewlengthasaboxplotandhistogrampyplot. subplot(121)pyplot. boxplot(result)pyplot. subplot(122)pyplot. hist(result)pyplot. show()Listing 22. 10: Summarize the IMDB Dataset. Looking at the box and whisker plot and the histogram for the review lengths in words,we can probably see an exponential distribution that we can probably cover the mass of thedistribution with a clipped length of 400 to 500 words. Figure 22. 1: Review Length in Words for IMDB Dataset. 22. 3 Word Embeddings A recent breakthrough in the field of natural language processing is called word embedding. Thisis a technique where words are encoded as real-valued vectors in a high dimensional space, wherethe similarity between words in terms of meaning translates to closeness in the vector space.
22. 4. Simple Multilayer Perceptron Model163Discrete words are mapped to vectors of continuous numbers. This is useful when working withnatural language problems with neural networks as we require numbers as input values. Keras provides a convenient way to convert positive integer representations of words into aword embedding by an Embeddinglayer4. The layer takes arguments that define the mappingincluding the maximum number of expected words also called the vocabulary size (e. g. thelargest integer value that will be seen as an input). The layer also allows you to specify thedimensionality for each word vector, called the output dimension. We would like to use a word embedding representation for the IMDB dataset. Let's saythat we are only interested in the first 5,000 most used words in the dataset. Therefore ourvocabulary size will be 5,000. We can choose to use a 32-dimensional vector to represent eachword. Finally, we may choose to cap the maximum review length at 500 words, truncatingreviews longer than that and padding reviews shorter than that with 0 values. We would loadthe IMDB dataset as follows:imdb. load_data(nb_words=5000)Listing 22. 11: Only Load the Top 5,000 words in the IMDB Review. We would then use the Keras utility to truncate or pad the dataset to a length of 500 foreach observation using thesequence. padsequences()function. X_train = sequence. pad_sequences(X_train, maxlen=500)X_test = sequence. pad_sequences(X_test, maxlen=500)Listing 22. 12: Pad Reviews in the IMDB Dataset. Finally, later on, the first layer of our model would be an word embedding layer createdusing the Embeddingclass as follows:Embedding(5000, 32, input_length=500)Listing 22. 13: Define a Word Embedding Representation. The output of this first layer would be a matrix with the size 32⇥500 for a given moviereview training or test pattern in integer format. Now that we know how to load the IMDBdataset in Keras and how to use a word embedding representation for it, let's develop andevaluate some models. 22. 4 Simple Multilayer Perceptron Model We can start o↵by developing a simple Multilayer Perceptron model with a single hidden layer. The word embedding representation is a true innovation and we will demonstrate what wouldhave been considered world class results in 2011 with a relatively simple neural network. Let'sstart o↵by importing the classes and functions required for this model and initializing therandom number generator to a constant value to ensure we can easily reproduce the results. #MLPforthe IMDBproblemimportnumpyfromkeras. datasetsimportimdbfromkeras. modelsimport Sequentialfromkeras. layersimport Dense4http://keras. io/layers/embeddings/
22. 4. Simple Multilayer Perceptron Model164fromkeras. layersimport Flattenfromkeras. layers. embeddingsimport Embeddingfromkeras. preprocessingimportsequence#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)Listing 22. 14: Load Classes and Functions and Seed Random Number Generator. Next we will load the IMDB dataset. We will simplify the dataset as discussed during thesection on word embeddings. Only the top 5,000 words will be loaded. We will also use a50%/50% split of the dataset into training and test. This is a good standard split methodology. #loadthedatasetbutonlykeepthetopnwords,zerotheresttop_words = 5000test_split = 0. 33(X_train, y_train), (X_test, y_test) = imdb. load_data(nb_words=top_words)Listing 22. 15: Load and s Plit the IMDB Dataset. We will bound reviews at 500 words, truncating longer reviews and zero-padding shorterreviews. max_words = 500X_train = sequence. pad_sequences(X_train, maxlen=max_words)X_test = sequence. pad_sequences(X_test, maxlen=max_words)Listing 22. 16: Pad IMDB Reviews to a Fixed Length. Now we can create our model. We will use an Embeddinglayer as the input layer, settingthe vocabulary to 5,000, the word vector size to 32 dimensions and theinputlengthto 500. The output of this first layer will be a 32⇥500 sized matrix as discussed in the previous section. We will flatten the Embedding layers output to one dimension, then use one dense hidden layerof 250 units with a rectifier activation function. The output layer has one neuron and will use asigmoid activation to output values of 0 and 1 as predictions. The model uses logarithmic lossand is optimized using the ecient ADAM optimization procedure. #createthemodelmodel = Sequential()model. add(Embedding(top_words, 32, input_length=max_words))model. add(Flatten())model. add(Dense(250, activation= relu ))model. add(Dense(1, activation= sigmoid ))model. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])print(model. summary())Listing 22. 17: Define a Multilayer Perceptron Model. We can fit the model and use the test set as validation while training. This model overfitsvery quickly so we will use very few training epochs, in this case just 2. There is a lot of data sowe will use a batch size of 128. After the model is trained, we evaluate it's accuracy on the testdataset. #Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=2, batch_size=128,verbose=1)#Finalevaluationofthemodel
22. 5. One-Dimensional Convolutional Neural Network165scores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 22. 18: Fit and Evaluate the Multilayer Perceptron Model. The full code listing is provided below for completeness. #MLPforthe IMDBproblemimportnumpyfromkeras. datasetsimportimdbfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Flattenfromkeras. layers. embeddingsimport Embeddingfromkeras. preprocessingimportsequence#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loadthedatasetbutonlykeepthetopnwords,zerotheresttop_words = 5000(X_train, y_train), (X_test, y_test) = imdb. load_data(nb_words=top_words)max_words = 500X_train = sequence. pad_sequences(X_train, maxlen=max_words)X_test = sequence. pad_sequences(X_test, maxlen=max_words)#createthemodelmodel = Sequential()model. add(Embedding(top_words, 32, input_length=max_words))model. add(Flatten())model. add(Dense(250, activation= relu ))model. add(Dense(1, activation= sigmoid ))model. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])print(model. summary())#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=2, batch_size=128,verbose=1)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 22. 19: Multilayer Perceptron Model for the IMDB Dataset. Running this example fits the model and summarizes the estimated performance. We cansee that this very simple model achieves a score of nearly 86. 4% which is in the neighborhood ofthe original paper, with very little e↵ort. Accuracy: 87. 37%Listing 22. 20: Output from Evaluating the Multilayer Perceptron Model. I'm sure we can do better if we trained this network, perhaps using a larger embedding andadding more hidden layers. Let's try a di↵erent network type. 22. 5 One-Dimensional Convolutional Neural Network Convolutional neural networks were designed to honor the spatial structure in image data whilstbeing robust to the position and orientation of learned objects in the scene. This same principle
22. 5. One-Dimensional Convolutional Neural Network166can be used on sequences, such as the one-dimensional sequence of words in a movie review. The same properties that make the CNN model attractive for learning to recognize objects inimages can help to learn structure in paragraphs of words, namely the techniques invariance tothe specific position of features. Keras supports one dimensional convolutions and pooling by the Convolution1Dand Max Pooling1Dclasses respectively. Again, let's import the classes and functions needed for thisexample and initialize our random number generator to a constant value so that we can easilyreproduce results. #CNNforthe IMDBproblemimportnumpyfromkeras. datasetsimportimdbfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Flattenfromkeras. layers. convolutionalimport Convolution1Dfromkeras. layers. convolutionalimport Max Pooling1Dfromkeras. layers. embeddingsimport Embeddingfromkeras. preprocessingimportsequence#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)Listing 22. 21: Import Classes and Functions and Seed Random Number Generator. We can also load and prepare our IMDB dataset as we did before. #loadthedatasetbutonlykeepthetopnwords,zerotheresttop_words = 5000test_split = 0. 33(X_train, y_train), (X_test, y_test) = imdb. load_data(nb_words=top_words)#paddatasettoamaximumreviewlengthinwordsmax_words = 500X_train = sequence. pad_sequences(X_train, maxlen=max_words)X_test = sequence. pad_sequences(X_test, maxlen=max_words)Listing 22. 22: Load, Split and Pad IMDB Dataset. We can now define our convolutional neural network model. This time, after the Embeddinginput layer, we insert a Convolution1Dlayer. This convolutional layer has 32 feature mapsand reads embedded word representations 3 vector elements of the word embedding at a time. The convolutional layer is followed by a Max Pooling1Dlayer with a length and stride of 2 thathalves the size of the feature maps from the convolutional layer. The rest of the network is thesame as the neural network above. #createthemodelmodel = Sequential()model. add(Embedding(top_words, 32, input_length=max_words))model. add(Convolution1D(nb_filter=32, filter_length=3, border_mode= same,activation= relu ))model. add(Max Pooling1D(pool_length=2))model. add(Flatten())model. add(Dense(250, activation= relu ))model. add(Dense(1, activation= sigmoid ))model. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])print(model. summary())
22. 5. One-Dimensional Convolutional Neural Network167Listing 22. 23: Define the CNN Model. We also fit the network the same as before. #Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=2, batch_size=128,verbose=1)#Finalevaluationofthemodelscores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 22. 24: Fit and Evaluate the CNN Model. Running the example, we are first presented with a summary of the network structure (notshown here). We can see our convolutional layer preserves the dimensionality of our Embeddinginput layer of 32 dimensional input with a maximum of 500 words. The pooling layer compressesthis representation by halving it. Running the example o↵ers a small but welcome improvementover the neural network model above with an accuracy of nearly 88. 3%. The full code listing is provided below for completeness. #CNNforthe IMDBproblemimportnumpyfromkeras. datasetsimportimdbfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Flattenfromkeras. layers. convolutionalimport Convolution1Dfromkeras. layers. convolutionalimport Max Pooling1Dfromkeras. layers. embeddingsimport Embeddingfromkeras. preprocessingimportsequence#fixrandomseedforreproducibilityseed = 7numpy. random. seed(seed)#loadthedatasetbutonlykeepthetopnwords,zerotheresttop_words = 5000(X_train, y_train), (X_test, y_test) = imdb. load_data(nb_words=top_words)#paddatasettoamaximumreviewlengthinwordsmax_words = 500X_train = sequence. pad_sequences(X_train, maxlen=max_words)X_test = sequence. pad_sequences(X_test, maxlen=max_words)#createthemodelmodel = Sequential()model. add(Embedding(top_words, 32, input_length=max_words))model. add(Convolution1D(nb_filter=32, filter_length=3, border_mode= same,activation= relu ))model. add(Max Pooling1D(pool_length=2))model. add(Flatten())model. add(Dense(250, activation= relu ))model. add(Dense(1, activation= sigmoid ))model. compile(loss= binary_crossentropy, optimizer= adam, metrics=[ accuracy ])print(model. summary())#Fitthemodelmodel. fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=2, batch_size=128,verbose=1)#Finalevaluationofthemodel
22. 6. Summary168scores = model. evaluate(X_test, y_test, verbose=0)print("Accuracy:%. 2f%%"% (scores[1]*100))Listing 22. 25: CNN Model for the IMDB Dataset. Accuracy: 88. 28%Listing 22. 26: Output from Evaluating the CNN Model. Again, there is a lot of opportunity for further optimization, such as the use of deeper and/orlarger convolutional layers. One interesting idea is to set the max pooling layer to use an inputlength of 500. This would compress each feature map to a single 32 length vector and mayboost performance. 22. 6 Summary In this lesson you discovered the IMDB sentiment analysis dataset for natural language processing. You learned how to develop deep learning models for sentiment analysis including: How to load and review the IMDB dataset within Keras. How to develop a large neural network model for sentiment analysis. How to develop a one-dimensional convolutional neural network model for sentimentanalysis. This tutorial concludes Part V and your introduction to convolutional neural networks in Keras. Next in Part VI we will discover a di↵erent type of neural network intended to learn andpredict sequences called recurrent neural networks.
Part VIRecurrent Neural Networks 169
Chapter 23
Crash Course In Recurrent Neural Networks

There is another type of neural network that is dominating difficult machine learning problems that involve sequences of inputs, called recurrent neural networks. Recurrent neural networks have connections that have loops, adding feedback and memory to the networks over time. This memory allows this type of network to learn and generalize across sequences of inputs rather than individual patterns.

A powerful type of recurrent neural network called the Long Short-Term Memory network has been shown to be particularly effective when stacked into a deep configuration, achieving state-of-the-art results on a diverse array of problems from language translation to automatic captioning of images and videos. In this lesson you will get a crash course in recurrent neural networks for deep learning, acquiring just enough understanding to start using LSTM networks in Python with Keras. After reading this lesson, you will know:

The limitations of Multilayer Perceptrons that are addressed by recurrent neural networks.
The problems that must be addressed to make recurrent neural networks useful.
The details of the Long Short-Term Memory networks used in applied deep learning.

Let's get started.

23.1 Support For Sequences in Neural Networks

There are some problem types that are best framed with either a sequence as an input or an output. For example, consider a univariate time series problem, like the price of a stock over time. This dataset can be framed as a prediction problem for a classical feedforward Multilayer Perceptron network by defining a window size (e.g. 5) and training the network to learn to make short term predictions from the fixed sized window of inputs.

This would work, but it is very limited. The window of inputs adds memory to the problem, but it is limited to just a fixed number of points and must be chosen with sufficient knowledge of the problem. A naive window would not capture the broader trends over minutes, hours and days that might be relevant to making a prediction. From one prediction to the next, the network only knows about the specific inputs it is provided.
Univariate time series prediction is important, but there are even more interesting problems that involve sequences. Consider the following taxonomy of sequence problems that require a mapping of an input to an output (taken from Andrej Karpathy1).

One-to-Many: sequence output, for image captioning.
Many-to-One: sequence input, for sentiment classification.
Many-to-Many: sequence in and out, for machine translation.
Synchronized Many-to-Many: synced sequences in and out, for video classification.

We can also see that a one-to-one mapping of input to output would be an example of a classical feedforward neural network for a prediction task like image classification. Support for sequences in neural networks is an important class of problem and one where deep learning has recently shown impressive results. State-of-the-art results have been achieved using a type of network specifically designed for sequence problems called recurrent neural networks.

23.2 Recurrent Neural Networks

Recurrent Neural Networks or RNNs are a special type of neural network designed for sequence problems. Given a standard feedforward Multilayer Perceptron network, a recurrent neural network can be thought of as the addition of loops to the architecture. For example, in a given layer, each neuron may pass its signal laterally (sideways) in addition to forward to the next layer. The output of the network may feed back as an input to the network with the next input vector. And so on.

The recurrent connections add state or memory to the network and allow it to learn broader abstractions from the input sequences. The field of recurrent neural networks is well established with popular methods. For the techniques to be effective on real problems, two major issues needed to be resolved for the network to be useful.

1. How to train the network with Backpropagation.
2. How to stop gradients vanishing or exploding during training.

23.2.1 How to Train Recurrent Neural Networks

The staple technique for training feedforward neural networks is to back propagate error and update the network weights. Backpropagation breaks down in a recurrent neural network because of the recurrent or loop connections. This was addressed with a modification of the Backpropagation technique called Backpropagation Through Time, or BPTT.

Instead of performing backpropagation on the recurrent network as stated, the structure of the network is unrolled, where copies of the neurons that have recurrent connections are created. For example, a single neuron with a connection to itself (A→A) could be represented as two neurons with the same weight values (A→B). This allows the cyclic graph of a recurrent neural network to be turned into an acyclic graph like a classic feedforward neural network, and Backpropagation can be applied.

1http://karpathy.github.io/2015/05/21/rnn-effectiveness/
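To make the unrolling idea concrete, the short NumPy sketch below steps a single recurrent neuron forward through a toy sequence; the weights and inputs are made-up values for illustration only and this is not code from Keras or from the examples in this book.

# Sketch: forward pass of one recurrent neuron unrolled over a short sequence.
# The weight values and the inputs are arbitrary, chosen only for illustration.
import numpy as np

w_in = 0.5                   # weight on the current input
w_rec = 0.9                  # weight on the recurrent (self) connection
inputs = [1.0, 0.5, -0.25]   # a toy input sequence

state = 0.0      # the neuron's state, carried from one time step to the next
unrolled = []    # one entry per time step, as if the loop were copied out
for t, x in enumerate(inputs):
    state = np.tanh(w_in * x + w_rec * state)
    unrolled.append(state)
    print("step %d: output %.4f" % (t, state))

# Training with BPTT applies ordinary backpropagation to this unrolled chain,
# so the gradient for w_rec accumulates contributions from every time step.

Because the same weights appear at every step of the unrolled chain, gradients are multiplied repeatedly as they flow backwards, which is exactly where the vanishing and exploding gradient problems discussed next come from.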
23. 3. Long Short-Term Memory Networks17223. 2. 2 How to Have Stable Gradients During Training When Back propagation is used in very deep neural networks and in unrolled recurrent neuralnetworks, the gradients that are calculated in order to update the weights can become unstable. They can become very large numbers called exploding gradients or very small numbers calledthe vanishing gradient problem. These large numbers in turn are used to update the weights inthe network, making training unstable and the network unreliable. This problem is alleviated in deep Multilayer Perceptron networks through the use of the Rectifier transfer function, and even more exotic but now less popular approaches of usingunsupervised pre-training of layers. In recurrent neural network architectures, this problem hasbeen alleviated using a new type of architecture called the Long Short-Term Memory Networksthat allows deep recurrent networks to be trained. 23. 3 Long Short-Term Memory Networks The Long Short-Term Memory or LSTM network is a recurrent neural network that is trainedusing Back propagation Through Time and overcomes the vanishing gradient problem. As suchit can be used to create large (stacked) recurrent networks, that in turn can be used to addressdicult sequence problems in machine learning and achieve state-of-the-art results. Instead ofneurons, LSTM networks have memory blocks that are connected into layers. A block has components that make it smarter than a classical neuron and a memory forrecent sequences. A block contains gates that manage the block's state and output. A unitoperates upon an input sequence and each gate within a unit uses the sigmoid activationfunction to control whether they are triggered or not, making the change of state and additionof information flowing through the unit conditional. There are three types of gates within amemory unit: Forget Gate: conditionally decides what information to discard from the unit. Input Gate: conditionally decides which values from the input to update the memorystate. Output Gate: conditionally decides what to output based on input and the memory ofthe unit. Each unit is like a mini state machine where the gates of the units have weights that arelearned during the training procedure. You can see how you may achieve a sophisticated learningand memory from a layer of LSTMs, and it is not hard to imagine how higher-order abstractionsmay be layered with multiple such layers. 23. 4 Summary In this lesson you discovered sequence problems and recurrent neural networks that can be usedto address them. Specifically, you learned: The limitations of classical feedforward neural networks and how recurrent neural networkscan overcome these problems.
23. 4. Summary173 The practical problems in training recurrent neural networks and how they are overcome. The Long Short-Term Memory network used to create deep recurrent neural networks. 23. 4. 1 Next In this lesson you took a crash course in recurrent neural networks. In the next lesson you willdiscover how you can tackle simple time series prediction problems with Multilayer Perceptronmodels, a precursor to working with recurrent networks.
Chapter 24
Time Series Prediction with Multilayer Perceptrons

Time series prediction is a difficult problem both to frame and to address with machine learning. In this lesson you will discover how to develop neural network models for time series prediction in Python using the Keras deep learning library. After reading this lesson you will know:

About the airline passengers univariate time series prediction problem.
How to phrase time series prediction as a regression problem and develop a neural network model for it.
How to frame time series prediction with a time lag and develop a neural network model for it.

Let's get started.

24.1 Problem Description: Time Series Prediction

The problem we are going to look at in this lesson is the international airline passengers prediction problem. This is a problem where, given a year and a month, the task is to predict the number of international airline passengers in units of 1,000. The data ranges from January 1949 to December 1960, or 12 years, with 144 observations. The dataset is available for free from the DataMarket webpage as a CSV download1 with the filename international-airline-passengers.csv. Below is a sample of the first few lines of the file.

"Month","International airline passengers: monthly totals in thousands. Jan 49 ? Dec 60"
"1949-01",112
"1949-02",118
"1949-03",132
"1949-04",129
"1949-05",121

Listing 24.1: Sample of the International Airline Passengers Dataset File.

1https://goo.gl/fpwEXk
We can load this dataset easily using the Pandas library. We are not interested in the date, given that each observation is separated by the same interval of one month. Therefore, when we load the dataset we can exclude the first column. The downloaded dataset also has footer information that we can exclude with the skipfooter argument to pandas.read_csv() set to 3 for the 3 footer lines. Once loaded, we can easily plot the whole dataset. The code to load and plot the dataset is listed below.

import pandas
import matplotlib.pyplot as plt
dataset = pandas.read_csv('international-airline-passengers.csv', usecols=[1],
    engine='python', skipfooter=3)
plt.plot(dataset)
plt.show()

Listing 24.2: Load and Plot the Time Series Dataset.

You can see an upward trend in the plot. You can also see some periodicity to the dataset that probably corresponds to the northern hemisphere summer holiday period.

Figure 24.1: Plot of the Airline Passengers Dataset.

We are going to keep things simple and work with the data as-is. Normally, it is a good idea to investigate various data preparation techniques to rescale the data and to make it stationary.
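As a rough illustration of those two preparation ideas (and not a step used in the rest of this chapter), the sketch below rescales the series to the range 0-1 and then removes much of the trend with a first-order difference. The use of scikit-learn's MinMaxScaler and a one-step difference are illustrative assumptions, not recommendations from this book.

# Sketch: two common preparation steps that this chapter deliberately skips.
# The choices below (MinMax scaling to 0-1 and a one-step difference) are
# illustrative only.
import pandas
from sklearn.preprocessing import MinMaxScaler

dataframe = pandas.read_csv('international-airline-passengers.csv', usecols=[1],
    engine='python', skipfooter=3)
values = dataframe.values.astype('float32')

# rescale the passenger counts to the range 0-1
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)

# remove much of the upward trend by differencing consecutive months
differenced = scaled[1:] - scaled[:-1]
print(differenced[:5])

If you do transform the data like this, remember that any predictions need to be inverted back through the same steps before they can be compared to the raw passenger counts.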
24.2 Multilayer Perceptron Regression

We will phrase the time series prediction problem as a regression problem. That is, given the number of passengers (in units of thousands) this month, what is the number of passengers next month? We can write a simple function to convert our single column of data into a two-column dataset: the first column containing this month's (t) passenger count and the second column containing next month's (t+1) passenger count, to be predicted. Before we get started, let's first import all of the functions and classes we intend to use.

import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
# fix random seed for reproducibility
numpy.random.seed(7)

Listing 24.3: Import Classes and Functions.

We can also use the code from the previous section to load the dataset as a Pandas dataframe. We can then extract the NumPy array from the dataframe and convert the integer values to floating point values, which are more suitable for modeling with a neural network.

# load the dataset
dataframe = pandas.read_csv('international-airline-passengers.csv', usecols=[1],
    engine='python', skipfooter=3)
dataset = dataframe.values
dataset = dataset.astype('float32')

Listing 24.4: Load the Time Series Dataset.

After we model our data and estimate the skill of our model on the training dataset, we need to get an idea of the skill of the model on new unseen data. For a normal classification or regression problem we would do this using k-fold cross validation. With time series data, the sequence of values is important. A simple method that we can use is to split the ordered dataset into train and test datasets. The code below calculates the index of the split point and separates the data into a training dataset with 67% of the observations that we can use to train our model, leaving the remaining 33% for testing the model.

# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
print(len(train), len(test))

Listing 24.5: Split Dataset into Train and Test.

Now we can define a function to create a new dataset as described above. The function takes two arguments: the dataset, which is a NumPy array that we want to convert into a dataset, and the look_back, which is the number of previous time steps to use as input variables to predict the next time period, in this case defaulted to 1. This default will create a dataset where X is the number of passengers at a given time (t) and Y is the number of passengers at the next time (t+1). It can be configured and we will look at constructing a differently shaped dataset in the next section.
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

Listing 24.6: Function to Prepare Dataset For Modeling.

Let's take a look at the effect of this function on the first few rows of the dataset.

X   Y
112 118
118 132
132 129
129 121
121 135

Listing 24.7: Sample of the Prepared Dataset.

If you compare these first 5 rows to the original dataset sample listed in the previous section, you can see the X=t and Y=t+1 pattern in the numbers. Let's use this function to prepare the train and test datasets ready for modeling.

# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

Listing 24.8: Call Function to Prepare Dataset For Modeling.

We can now fit a Multilayer Perceptron model to the training data. We use a simple network with 1 input, 1 hidden layer with 8 neurons and an output layer. The model is fit using mean squared error, which if we take the square root gives us an error score in the units of the dataset. I tried a few rough parameters and settled on the configuration below, but by no means is the network listed optimized.

# create and fit Multilayer Perceptron model
model = Sequential()
model.add(Dense(8, input_dim=look_back, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, nb_epoch=200, batch_size=2, verbose=2)

Listing 24.9: Define and Fit Multilayer Perceptron Model.

Once the model is fit, we can estimate the performance of the model on the train and test datasets. This will give us a point of comparison for new models.

# estimate model performance
trainScore = model.evaluate(trainX, trainY, verbose=0)
print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore)))
testScore = model.evaluate(testX, testY, verbose=0)
print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore)))

Listing 24.10: Evaluate the Fit Model.
Finally, we can generate predictions using the model for both the train and test datasets to get a visual indication of the skill of the model. Because of how the dataset was prepared, we must shift the predictions so that they align on the x-axis with the original dataset. Once prepared, the data is plotted, showing the original dataset in blue, the predictions for the train dataset in green and the predictions on the unseen test dataset in red.

# generate predictions for training
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(dataset)
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

Listing 24.11: Generate and Plot Predictions.

We can see that the model did a pretty poor job of fitting both the training and the test datasets. It basically predicted the same input value as the output.
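That behavior, echoing last month's value as the prediction for next month, is exactly what a naive persistence forecast does. As a point of reference only, and assuming the trainX, trainY, testX and testY arrays produced by Listings 24.6 to 24.8, the sketch below scores such a persistence forecast; it is not part of the book's own listing.

# Sketch: a naive persistence baseline for comparison with the MLP above.
# Assumes trainX, trainY, testX, testY from Listings 24.6-24.8 (look_back=1).
import math
from sklearn.metrics import mean_squared_error

# the persistence forecast simply predicts the previous observation
trainBaseline = trainX[:, 0]
testBaseline = testX[:, 0]
print('Persistence Train Score: %.2f RMSE' %
      math.sqrt(mean_squared_error(trainY, trainBaseline)))
print('Persistence Test Score: %.2f RMSE' %
      math.sqrt(mean_squared_error(testY, testBaseline)))

If the network's RMSE is not clearly lower than this baseline, the plot's impression that the model is simply repeating its input is confirmed.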
24. 2. Multilayer Perceptron Regression179 Figure 24. 2: Number of Passengers Predicted Using a Simple Multilayer Perceptron Model. Blue=Whole Dataset, Green=Training, Red=Predictions. For completeness, below is the entire code listing. #Multilayer Perceptronto Predict International Airline Passengers(t+1,givent)importnumpyimportmatplotlib. pyplot as pltimportpandasimportmathfromkeras. modelsimport Sequentialfromkeras. layersimport Dense#fixrandomseedforreproducibilitynumpy. random. seed(7)#loadthedatasetdataframe = pandas. read_csv( international-airline-passengers. csv, usecols=[1],engine= python, skipfooter=3)dataset = dataframe. valuesdataset = dataset. astype( float32 )#splitintotrainandtestsetstrain_size =int(len(dataset) * 0. 67)test_size =len(dataset)-train_sizetrain, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]print(len(train),len(test))#convertanarrayofvaluesintoadatasetmatrixdefcreate_dataset(dataset, look_back=1):
24. 2. Multilayer Perceptron Regression180data X, data Y = [], []foriinrange(len(dataset)-look_back-1):a = dataset[i:(i+look_back), 0]data X. append(a)data Y. append(dataset[i + look_back, 0])returnnumpy. array(data X), numpy. array(data Y)#reshapeinto X=tand Y=t+1look_back = 1train X, train Y = create_dataset(train, look_back)test X, test Y = create_dataset(test, look_back)#createandfit Multilayer Perceptronmodelmodel = Sequential()model. add(Dense(8, input_dim=look_back, activation= relu ))model. add(Dense(1))model. compile(loss= mean_squared_error, optimizer= adam )model. fit(train X, train Y, nb_epoch=200, batch_size=2, verbose=2)#Estimatemodelperformancetrain Score = model. evaluate(train X, train Y, verbose=0)print( Train Score:%. 2f MSE(%. 2f RMSE) % (train Score, math. sqrt(train Score)))test Score = model. evaluate(test X, test Y, verbose=0)print( Test Score:%. 2f MSE(%. 2f RMSE) % (test Score, math. sqrt(test Score)))#generatepredictionsfortrainingtrain Predict = model. predict(train X)test Predict = model. predict(test X)#shifttrainpredictionsforplottingtrain Predict Plot = numpy. empty_like(dataset)train Predict Plot[:, :] = numpy. nantrain Predict Plot[look_back:len(train Predict)+look_back, :] = train Predict#shifttestpredictionsforplottingtest Predict Plot = numpy. empty_like(dataset)test Predict Plot[:, :] = numpy. nantest Predict Plot[len(train Predict)+(look_back*2)+1:len(dataset)-1, :] = test Predict#plotbaselineandpredictionsplt. plot(dataset)plt. plot(train Predict Plot)plt. plot(test Predict Plot)plt. show()Listing 24. 12: Multilayer Perceptron Model for the t+1 Model. Running the model produces the following output....Epoch 195/2000s-loss: 551. 1626Epoch 196/2000s-loss: 542. 7755Epoch 197/2000s-loss: 539. 6731Epoch 198/2000s-loss: 539. 1133Epoch 199/2000s-loss: 539. 8144Epoch 200/2000s-loss: 539. 8541Train Score: 531. 45 MSE (23. 05 RMSE)Test Score: 2353. 35 MSE (48. 51 RMSE)
24. 3. Multilayer Perceptron Using the Window Method181Listing 24. 13: Sample output of the Multilayer Perceptron Model for the t+1 Model. Taking the square root of the performance estimates, we can see that the model has anaverage error of 23 passengers (in thousands) on the training dataset and 48 passengers (inthousands) on the test dataset. 24. 3 Multilayer Perceptron Using the Window Method We can also phrase the problem so that multiple recent time steps can be used to make theprediction for the next time step. This is called the window method, and the size of the windowis a parameter that can be tuned for each problem. For example, given the current time (t)w ewant to predict the value at the next time in the sequence (t+1), we can use the current time(t) as well as the two prior times (t-1andt-2). When phrased as a regression problem theinput variables aret-2,t-1,tand the output variable ist+1. Thecreatedataset()function we wrote in the previous section allows us to create thisformulation of the time series problem by increasing thelookbackargument from 1 to 3. Asample of the dataset with this formulation looks as follows:X1 X2 X3 Y112 118 132 129118 132 129 121132 129 121 135129 121 135 148121 135 148 148Listing 24. 14: Sample Dataset of the Window Formulation of the Problem. We can re-run the example in the previous section with the larger window size. The wholecode listing with just the window size change is listed below for completeness. #Multilayer Perceptronto Predict International Airline Passengers(t+1,givent,t-1,t-2)importnumpyimportmatplotlib. pyplot as pltimportpandasimportmathfromkeras. modelsimport Sequentialfromkeras. layersimport Dense#convertanarrayofvaluesintoadatasetmatrixdefcreate_dataset(dataset, look_back=1):data X, data Y = [], []foriinrange(len(dataset)-look_back-1):a = dataset[i:(i+look_back), 0]data X. append(a)data Y. append(dataset[i + look_back, 0])returnnumpy. array(data X), numpy. array(data Y)#fixrandomseedforreproducibilitynumpy. random. seed(7)#loadthedatasetdataframe = pandas. read_csv( international-airline-passengers. csv, usecols=[1],engine= python, skipfooter=3)dataset = dataframe. valuesdataset = dataset. astype( float32 )
24. 3. Multilayer Perceptron Using the Window Method182#splitintotrainandtestsetstrain_size =int(len(dataset) * 0. 67)test_size =len(dataset)-train_sizetrain, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]print(len(train),len(test))#reshapedatasetlook_back = 10train X, train Y = create_dataset(train, look_back)test X, test Y = create_dataset(test, look_back)#createandfit Multilayer Perceptronmodelmodel = Sequential()model. add(Dense(8, input_dim=look_back, activation= relu ))model. add(Dense(1))model. compile(loss= mean_squared_error, optimizer= adam )model. fit(train X, train Y, nb_epoch=200, batch_size=2, verbose=2)#Estimatemodelperformancetrain Score = model. evaluate(train X, train Y, verbose=0)print( Train Score:%. 2f MSE(%. 2f RMSE) % (train Score, math. sqrt(train Score)))test Score = model. evaluate(test X, test Y, verbose=0)print( Test Score:%. 2f MSE(%. 2f RMSE) % (test Score, math. sqrt(test Score)))#generatepredictionsfortrainingtrain Predict = model. predict(train X)test Predict = model. predict(test X)#shifttrainpredictionsforplottingtrain Predict Plot = numpy. empty_like(dataset)train Predict Plot[:, :] = numpy. nantrain Predict Plot[look_back:len(train Predict)+look_back, :] = train Predict#shifttestpredictionsforplottingtest Predict Plot = numpy. empty_like(dataset)test Predict Plot[:, :] = numpy. nantest Predict Plot[len(train Predict)+(look_back*2)+1:len(dataset)-1, :] = test Predict#plotbaselineandpredictionsplt. plot(dataset)plt. plot(train Predict Plot)plt. plot(test Predict Plot)plt. show()Listing 24. 15: Multilayer Perceptron Model for the Window Model. Running the example provides the following output....Epoch 195/2000s-loss: 502. 7846Epoch 196/2000s-loss: 495. 8495Epoch 197/2000s-loss: 491. 8626Epoch 198/2000s-loss: 500. 8285Epoch 199/2000s-loss: 489. 7538Epoch 200/2000s-loss: 495. 5651Train Score: 509. 34 MSE (22. 57 RMSE)Test Score: 2277. 27 MSE (47. 72 RMSE)
Listing 24.16: Sample Output of the Multilayer Perceptron Model for the Window Model.

We can see that the error was reduced compared to that of the previous section. Again, the window size and the network architecture were not tuned; this is just a demonstration of how to frame a prediction problem. Taking the square root of the performance scores, we can see the average error on the training dataset was 22 passengers (in thousands per month) and the average error on the unseen test set was 47 passengers (in thousands per month).

Figure 24.3: Prediction of the Number of Passengers Using a Simple Multilayer Perceptron Model With Time Lag. Blue=Whole Dataset, Green=Training, Red=Predictions.

24.4 Summary

In this lesson you discovered how to develop a neural network model for a time series prediction problem using the Keras deep learning library. After working through this tutorial you now know:

About the international airline passenger prediction time series dataset.
How to frame time series prediction problems as regression problems and develop a neural network model.
How to use the window approach to frame a time series prediction problem and develop a neural network model.

24.4.1 Next

In this lesson you learned how you can develop simple Multilayer Perceptron models for time series prediction. In the next lesson you will learn how we can build on this understanding and use LSTM recurrent neural networks for time series prediction.
Chapter 25
Time Series Prediction with LSTM Recurrent Neural Networks

Time series prediction problems are a difficult type of predictive modeling problem. Unlike regression predictive modeling, time series also adds the complexity of a sequence dependence among the input variables. A powerful type of neural network designed to handle sequence dependence is called the recurrent neural network. The Long Short-Term Memory network, or LSTM, is a type of recurrent neural network used in deep learning because very large architectures can be successfully trained.

In this lesson you will discover how to develop LSTM networks in Python using the Keras deep learning library to address a demonstration time series prediction problem. After completing this tutorial you will know how to implement and develop LSTM networks for your own time series prediction problems and other more general sequence problems. You will know:

How to develop LSTM networks for a time series prediction problem framed as regression.
How to develop LSTM networks for a time series prediction problem using a window for both features and time steps.
How to develop and make predictions using LSTM networks that maintain state (memory) across very long sequences.

We will develop a number of LSTMs for a standard time series prediction problem. The problem and the chosen configuration for the LSTM networks are for demonstration purposes only; they are not optimized. These examples will show you exactly how you can develop your own LSTM networks for time series predictive modeling problems. Let's get started.

25.1 LSTM Network For Regression

The problem we are going to look at in this lesson is the international airline passengers prediction problem, described in Section 24.1. We can phrase the problem as a regression problem, as was done in the previous chapter. That is, given the number of passengers (in units of thousands) this month, what is the number of passengers next month? This example will reuse the same data loading and preparation from the previous chapter, specifically the use of the create_dataset() function.
25. 1. LSTM Network For Regression186LSTMs are sensitive to the scale of the input data, specifically when the sigmoid (default)or tanh activation functions are used. It can be a good practice to rescale the data to the rangeof 0-to-1, also called normalizing. We can easily normalize the dataset using the Min Max Scalerpreprocessing class from the scikit-learn library. #normalizethedatasetscaler = Min Max Scaler(feature_range=(0, 1))dataset = scaler. fit_transform(dataset)Listing 25. 1: Normalize the Dataset. The LSTM network expects the input data (X) to be provided with a specific array structure inthe form of:[samples, time steps, features]. Our prepared data is in the form:[samples,features]and we are framing the problem as one time step for each sample. We can transformthe prepared train and test input data into the expected structure usingnumpy. reshape()asfollows:#reshapeinputtobe[samples,timesteps,features]train X = numpy. reshape(train X, (train X. shape[0], 1, train X. shape[1]))test X = numpy. reshape(test X, (test X. shape[0], 1, test X. shape[1]))Listing 25. 2: Reshape the Prepared Dataset for the LSTM Layers. We are now ready to design and fit our LSTM network for this problem. The network has avisible layer with 1 input, a hidden layer with 4 LSTM blocks or neurons and an output layerthat makes a single value prediction. The default sigmoid activation function is used for the LSTM memory blocks. The network is trained for 100 epochs and a batch size of 1 is used. #createandfitthe LSTMnetworkmodel = Sequential()model. add(LSTM(4, input_dim=look_back))model. add(Dense(1))model. compile(loss= mean_squared_error, optimizer= adam )model. fit(train X, train Y, nb_epoch=100, batch_size=1, verbose=2)Listing 25. 3: Define and Fit the LSTM network. For completeness, below is the entire code example. #LSTMforinternationalairlinepassengersproblemwithregressionframingimportnumpyimportmatplotlib. pyplot as pltimportpandasimportmathfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport LSTMfromsklearn. preprocessingimport Min Max Scalerfromsklearn. metricsimportmean_squared_error#convertanarrayofvaluesintoadatasetmatrixdefcreate_dataset(dataset, look_back=1):data X, data Y = [], []foriinrange(len(dataset)-look_back-1):a = dataset[i:(i+look_back), 0]data X. append(a)data Y. append(dataset[i + look_back, 0])returnnumpy. array(data X), numpy. array(data Y)
25. 1. LSTM Network For Regression187#fixrandomseedforreproducibilitynumpy. random. seed(7)#loadthedatasetdataframe = pandas. read_csv( international-airline-passengers. csv, usecols=[1],engine= python, skipfooter=3)dataset = dataframe. valuesdataset = dataset. astype( float32 )#normalizethedatasetscaler = Min Max Scaler(feature_range=(0, 1))dataset = scaler. fit_transform(dataset)#splitintotrainandtestsetstrain_size =int(len(dataset) * 0. 67)test_size =len(dataset)-train_sizetrain, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]#reshapeinto X=tand Y=t+1look_back = 1train X, train Y = create_dataset(train, look_back)test X, test Y = create_dataset(test, look_back)#reshapeinputtobe[samples,timesteps,features]train X = numpy. reshape(train X, (train X. shape[0], 1, train X. shape[1]))test X = numpy. reshape(test X, (test X. shape[0], 1, test X. shape[1]))#createandfitthe LSTMnetworkmodel = Sequential()model. add(LSTM(4, input_dim=look_back))model. add(Dense(1))model. compile(loss= mean_squared_error, optimizer= adam )model. fit(train X, train Y, nb_epoch=100, batch_size=1, verbose=2)#makepredictionstrain Predict = model. predict(train X)test Predict = model. predict(test X)#invertpredictionstrain Predict = scaler. inverse_transform(train Predict)train Y = scaler. inverse_transform([train Y])test Predict = scaler. inverse_transform(test Predict)test Y = scaler. inverse_transform([test Y])#calculaterootmeansquarederrortrain Score = math. sqrt(mean_squared_error(train Y[0], train Predict[:,0]))print( Train Score:%. 2f RMSE % (train Score))test Score = math. sqrt(mean_squared_error(test Y[0], test Predict[:,0]))print( Test Score:%. 2f RMSE % (test Score))#shifttrainpredictionsforplottingtrain Predict Plot = numpy. empty_like(dataset)train Predict Plot[:, :] = numpy. nantrain Predict Plot[look_back:len(train Predict)+look_back, :] = train Predict#shifttestpredictionsforplottingtest Predict Plot = numpy. empty_like(dataset)test Predict Plot[:, :] = numpy. nantest Predict Plot[len(train Predict)+(look_back*2)+1:len(dataset)-1, :] = test Predict#plotbaselineandpredictionsplt. plot(scaler. inverse_transform(dataset))plt. plot(train Predict Plot)plt. plot(test Predict Plot)plt. show()Listing 25. 4: LSTM Model on the t+1 Problem.
25. 1. LSTM Network For Regression188Running the model produces the following output....Epoch 95/1001s-loss: 0. 0020Epoch 96/1001s-loss: 0. 0020Epoch 97/1001s-loss: 0. 0021Epoch 98/1001s-loss: 0. 0020Epoch 99/1001s-loss: 0. 0020Epoch 100/1001s-loss: 0. 0019Train Score: 22. 61 RMSETest Score: 51. 58 RMSEListing 25. 5: Sample Output From the LSTM Model on the t+1 Problem. We can see that the model did an OK job of fitting both the training and the test datasets. Figure 25. 1: LSTM Trained on Regression Formulation of Passenger Prediction Problem We can see that the model has an average error of about 23 passengers (in thousands) onthe training dataset and about 52 passengers (in thousands) on the test dataset. Not that bad.
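One way to try to build on this first LSTM result, not covered in the listing above, is to stack LSTM layers into a deeper recurrent network, as discussed in the crash course chapter. The sketch below assumes the trainX, trainY and look_back variables prepared in Listing 25.4; the layer sizes and the use of the return_sequences argument to pass the first layer's per-time-step output into the second layer are illustrative choices, not tuned settings from this book.

# Sketch: a deeper variant that stacks two LSTM layers.
# Assumes trainX, trainY and look_back from Listing 25.4; the layer sizes
# are arbitrary illustrative values, not tuned settings from this book.
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

model = Sequential()
# return_sequences=True makes the first LSTM emit its output at every time
# step so the second LSTM receives a sequence rather than a single vector
model.add(LSTM(4, input_dim=look_back, return_sequences=True))
model.add(LSTM(4))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, nb_epoch=100, batch_size=1, verbose=2)

Note that with the look_back=1 framing each sample contains only a single time step, so stacking mainly adds capacity here; it becomes more interesting with the window and time step framings explored later in this chapter.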
25. 2. LSTM For Regression Using the Window Method18925. 2 LSTM For Regression Using the Window Method We can also phrase the problem so that multiple recent time steps can be used to make theprediction for the next time step. This is called a window and the size of the window is aparameter that can be tuned for each problem. For example, given the current time (t)w ew a n tto predict the value at the next time in the sequence (t+1), we can use the current time (t)as well as the two prior times (t-1andt-2) as input variables. When phrased as a regressionproblem the input variables aret-2, t-1,tand the output variable ist+1. Thecreatedataset()function we created in the previous section allows us to create thisformulation of the time series problem by increasing thelookbackargument from 1 to 3. Asample of the dataset with this formulation looks as follows:X1 X2 X3 Y112 118 132 129118 132 129 121132 129 121 135129 121 135 148121 135 148 148Listing 25. 6: Sample Dataset of the Window Formulation of the Problem. We can re-run the example in the previous section with the larger window size. The wholecode listing with just the window size change is listed below for completeness. #LSTMforinternationalairlinepassengersproblemwithwindowregressionframingimportnumpyimportmatplotlib. pyplot as pltimportpandasimportmathfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport LSTMfromsklearn. preprocessingimport Min Max Scalerfromsklearn. metricsimportmean_squared_error#convertanarrayofvaluesintoadatasetmatrixdefcreate_dataset(dataset, look_back=1):data X, data Y = [], []foriinrange(len(dataset)-look_back-1):a = dataset[i:(i+look_back), 0]data X. append(a)data Y. append(dataset[i + look_back, 0])returnnumpy. array(data X), numpy. array(data Y)#fixrandomseedforreproducibilitynumpy. random. seed(7)#loadthedatasetdataframe = pandas. read_csv( international-airline-passengers. csv, usecols=[1],engine= python, skipfooter=3)dataset = dataframe. valuesdataset = dataset. astype( float32 )#normalizethedatasetscaler = Min Max Scaler(feature_range=(0, 1))dataset = scaler. fit_transform(dataset)#splitintotrainandtestsetstrain_size =int(len(dataset) * 0. 67)test_size =len(dataset)-train_size
25. 2. LSTM For Regression Using the Window Method190train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]#reshapeinto X=tand Y=t+1look_back = 3train X, train Y = create_dataset(train, look_back)test X, test Y = create_dataset(test, look_back)#reshapeinputtobe[samples,timesteps,features]train X = numpy. reshape(train X, (train X. shape[0], 1, train X. shape[1]))test X = numpy. reshape(test X, (test X. shape[0], 1, test X. shape[1]))#createandfitthe LSTMnetworkmodel = Sequential()model. add(LSTM(4, input_dim=look_back))model. add(Dense(1))model. compile(loss= mean_squared_error, optimizer= adam )model. fit(train X, train Y, nb_epoch=100, batch_size=1, verbose=2)#makepredictionstrain Predict = model. predict(train X)test Predict = model. predict(test X)#invertpredictionstrain Predict = scaler. inverse_transform(train Predict)train Y = scaler. inverse_transform([train Y])test Predict = scaler. inverse_transform(test Predict)test Y = scaler. inverse_transform([test Y])#calculaterootmeansquarederrortrain Score = math. sqrt(mean_squared_error(train Y[0], train Predict[:,0]))print( Train Score:%. 2f RMSE % (train Score))test Score = math. sqrt(mean_squared_error(test Y[0], test Predict[:,0]))print( Test Score:%. 2f RMSE % (test Score))#shifttrainpredictionsforplottingtrain Predict Plot = numpy. empty_like(dataset)train Predict Plot[:, :] = numpy. nantrain Predict Plot[look_back:len(train Predict)+look_back, :] = train Predict#shifttestpredictionsforplottingtest Predict Plot = numpy. empty_like(dataset)test Predict Plot[:, :] = numpy. nantest Predict Plot[len(train Predict)+(look_back*2)+1:len(dataset)-1, :] = test Predict#plotbaselineandpredictionsplt. plot(scaler. inverse_transform(dataset))plt. plot(train Predict Plot)plt. plot(test Predict Plot)plt. show()Listing 25. 7: LSTM for the Window Model. Running the example provides the following output....Epoch 95/1001s-loss: 0. 0020Epoch 96/1001s-loss: 0. 0019Epoch 97/1001s-loss: 0. 0019Epoch 98/1001s-loss: 0. 0020Epoch 99/1001s-loss: 0. 0018Epoch 100/100