1s - loss: 0.0019
Train Score: 22.01 RMSE
Test Score: 68.69 RMSE

Listing 25.8: Sample Output From the LSTM Model on the Window Problem.

We can see that the error increased slightly compared to that of the previous section. The window size and the network architecture were not tuned: this is just a demonstration of how to frame a prediction problem.

Figure 25.2: LSTM Trained on Window Formulation of Passenger Prediction Problem.

25.3 LSTM For Regression with Time Steps

You may have noticed that the data preparation for the LSTM network includes time steps. Some sequence problems may have a varied number of time steps per sample. For example, you may have measurements of a physical machine leading up to a point of failure or a point of surge. Each incident would be a sample, the observations that lead up to the event would be the time steps, and the variables observed would be the features. Time steps provide another way to phrase our time series problem. Like above in the window example, we can take prior time steps in our time series as inputs to predict the output at the next time step. Instead of phrasing the past observations as separate input features, we can use them as time steps of the one input feature, which is indeed a more accurate framing of the problem.
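To make the distinction concrete before we modify the code, here is a small illustrative sketch that is not part of the book's listings (the array values are arbitrary). It shows how the same window of past observations can be arranged either as features at a single time step or as time steps of a single feature.

import numpy

# three toy samples, each holding look_back=3 past observations
windows = numpy.array([[112.0, 118.0, 132.0],
                       [118.0, 132.0, 129.0],
                       [132.0, 129.0, 121.0]])

# window framing: one time step with look_back separate input features
window_X = numpy.reshape(windows, (windows.shape[0], 1, windows.shape[1]))
print(window_X.shape)    # (3, 1, 3) as [samples, time steps, features]

# time step framing: look_back time steps of a single input feature
timestep_X = numpy.reshape(windows, (windows.shape[0], windows.shape[1], 1))
print(timestep_X.shape)  # (3, 3, 1) as [samples, time steps, features]

Both arrays hold the same numbers; only the meaning assigned to the second and third dimensions changes.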
We can do this using the same data representation as in the previous window-based example, except that when we reshape the data we set the columns to be the time steps dimension and change the features dimension back to 1. For example:

# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))

Listing 25.9: Reshape the Prepared Dataset for the LSTM Layers with Time Steps.

The entire code listing is provided below for completeness.

# LSTM for international airline passengers problem with time step regression framing
import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = pandas.read_csv('international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values
dataset = dataset.astype('float32')
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_dim=1))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, nb_epoch=100, batch_size=1, verbose=2)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

Listing 25.10: LSTM for the Time Steps Model.

Running the example provides the following output.

...
Epoch 95/100
1s - loss: 0.0023
Epoch 96/100
1s - loss: 0.0022
Epoch 97/100
1s - loss: 0.0022
Epoch 98/100
1s - loss: 0.0024
Epoch 99/100
1s - loss: 0.0021
Epoch 100/100
1s - loss: 0.0021
Train Score: 23.21 RMSE
Test Score: 57.32 RMSE

Listing 25.11: Sample Output From the LSTM Model on the Time Step Problem.

We can see that the results are slightly better than in the previous example, and the structure of the input data makes a lot more sense.
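As a small aside that is not part of the original listing, a model fit with this time step framing could be used to forecast the value that follows the last observed window. The sketch below assumes that model, scaler, dataset and look_back from Listing 25.10 are still in scope.

# forecast the next value after the end of the dataset (illustrative sketch)
last_window = dataset[-look_back:, 0]                    # most recent normalized observations
x_input = numpy.reshape(last_window, (1, look_back, 1))  # [samples, time steps, features]
yhat = model.predict(x_input)
yhat = scaler.inverse_transform(yhat)                    # back to the original scale
print('Next value: %.1f' % yhat[0, 0])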
Figure 25.3: LSTM Trained on Time Step Formulation of Passenger Prediction Problem.

25.4 LSTM With Memory Between Batches

The LSTM network has memory which is capable of remembering across long sequences. Normally, the state within the network is reset after each training batch when fitting the model, as well as after each call to model.predict() or model.evaluate(). We can gain finer control over when the internal state of the LSTM network is cleared in Keras by making the LSTM layer stateful. This means that it can build state over the entire training sequence and even maintain that state if needed to make predictions.

It requires that the training data not be shuffled when fitting the network. It also requires explicit resetting of the network state after each exposure to the training data (epoch) by calls to model.reset_states(). This means that we must create our own outer loop of epochs and within each epoch call model.fit() and model.reset_states(), for example:

for i in range(100):
    model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()

Listing 25.12: Manually Reset LSTM State Between Epochs.

Finally, when the LSTM layer is constructed, the stateful parameter must be set to True and, instead of specifying the input dimensions, we must hard code the number of samples in a batch, the number of time steps in a sample and the number of features in a time step by setting the batch_input_shape parameter. For example:

model.add(LSTM(4, batch_input_shape=(batch_size, time_steps, features), stateful=True))

Listing 25.13: Set the Stateful Parameter on the LSTM Layer.

This same batch size must then be used later when evaluating the model and making predictions. For example:

model.predict(trainX, batch_size=batch_size)

Listing 25.14: Set Batch Size when Making Predictions.

We can adapt the previous time step example to use a stateful LSTM. The full code listing is provided below.

# LSTM for international airline passengers problem with memory
import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = pandas.read_csv('international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values
dataset = dataset.astype('float32')
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
# make predictions
trainPredict = model.predict(trainX, batch_size=batch_size)
model.reset_states()
testPredict = model.predict(testX, batch_size=batch_size)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

Listing 25.15: LSTM for the Manual Management of State.

Running the example provides the following output:

...
Epoch 1/1
1s - loss: 0.0017
Epoch 1/1
1s - loss: 0.0017
Epoch 1/1
1s - loss: 0.0017
Epoch 1/1
1s - loss: 0.0017
Epoch 1/1
1s - loss: 0.0017
Train Score: 22.17 RMSE
Test Score: 71.53 RMSE

Listing 25.16: Sample Output From the Stateful LSTM Model.
We do see that the results are worse. The model may need more memory units and may need to be trained for more epochs to internalize the structure of the problem.

Figure 25.4: Stateful LSTM Trained on Regression Formulation of Passenger Prediction Problem.

25.5 Stacked LSTMs With Memory Between Batches

Finally, we will take a look at one of the big benefits of LSTMs: the fact that they can be successfully trained when stacked into deep network architectures. LSTM networks can be stacked in Keras in the same way that other layer types can be stacked. One addition to the configuration that is required is that an LSTM layer prior to each subsequent LSTM layer must return the sequence. This can be done by setting the return_sequences parameter on the layer to True. We can extend the stateful LSTM in the previous section to have two layers, as follows:

model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))

Listing 25.17: Define a Stacked LSTM Model.

The entire code listing is provided below for completeness.

# Stacked LSTM for international airline passengers problem with memory
import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = pandas.read_csv('international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values
dataset = dataset.astype('float32')
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
# make predictions
trainPredict = model.predict(trainX, batch_size=batch_size)
model.reset_states()
testPredict = model.predict(testX, batch_size=batch_size)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

Listing 25.18: Stacked LSTM Model.

Running the example produces the following output.

...
Epoch 1/1
4s - loss: 0.0032
Epoch 1/1
4s - loss: 0.0030
Epoch 1/1
4s - loss: 0.0029
Epoch 1/1
4s - loss: 0.0027
Epoch 1/1
4s - loss: 0.0026
Train Score: 26.97 RMSE
Test Score: 89.89 RMSE

Listing 25.19: Sample Output From the Stacked LSTM Model.

The predictions on the test dataset are again worse. This is more evidence to suggest the need for additional training epochs.
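As a rough sketch of that suggestion (not one of the book's listings), the manual training loop could simply be run for more epochs while tracking the loss reported by fit(). This assumes model, trainX, trainY and batch_size from Listing 25.18 are already defined; the epoch count of 500 is an arbitrary illustrative choice.

# train the stacked, stateful LSTM for more epochs, resetting state between passes
n_epochs = 500  # arbitrary larger value for illustration
losses = []
for i in range(n_epochs):
    history = model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size,
                        verbose=0, shuffle=False)
    model.reset_states()
    losses.append(history.history['loss'][0])
    if (i + 1) % 50 == 0:
        print('Epoch %d: loss=%.4f' % (i + 1, losses[-1]))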
Figure 25.5: Stacked Stateful LSTMs Trained on Regression Formulation of Passenger Prediction Problem.

25.6 Summary

In this lesson you discovered how to develop LSTM recurrent neural networks for time series prediction in Python with the Keras deep learning library. Specifically, you learned:

 - About the international airline passenger time series prediction problem.
 - How to create an LSTM for a regression and a window formulation of the time series problem.
 - How to create an LSTM with a time step formulation of the time series problem.
 - How to create an LSTM with state and stacked LSTMs with state to learn long sequences.

25.6.1 Next

In this lesson you learned about how to use LSTM recurrent neural networks for time series prediction. Up next you will use your newfound skill with LSTM networks to work on a sequence classification problem.
Chapter 26

Project: Sequence Classification of Movie Reviews

Sequence classification is a predictive modeling problem where you have some sequence of inputs over space or time and the task is to predict a category for the sequence. What makes this problem difficult is that the sequences can vary in length, be comprised of a very large vocabulary of input symbols and may require the model to learn the long term context or dependencies between symbols in the input sequence. In this project you will discover how you can develop LSTM recurrent neural network models for sequence classification problems in Python using the Keras deep learning library. After completing this project you will know:

 - How to develop an LSTM model for a sequence classification problem.
 - How to reduce overfitting in your LSTM models through the use of dropout.
 - How to combine LSTM models with Convolutional Neural Networks that excel at learning spatial relationships.

Let's get started.

26.1 Simple LSTM for Sequence Classification

The problem that we will use to demonstrate sequence learning in this tutorial is the IMDB movie review sentiment classification problem, introduced in Section 22.1. We can quickly develop a small LSTM for the IMDB problem and achieve good accuracy. Let's start off by importing the classes and functions required for this model and initializing the random number generator to a constant value to ensure we can easily reproduce the results.

import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
Listing 26.1: Import Classes and Functions and Seed Random Number Generator.

We need to load the IMDB dataset. We are constraining the dataset to the top 5,000 words. We also split the dataset into train (50%) and test (50%) sets.

# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)

Listing 26.2: Load and Split the Dataset.

Next, we need to truncate and pad the input sequences so that they are all the same length for modeling. The model will learn that the zero values carry no information, so although the sequences are not the same length in terms of content, same-length vectors are required to perform the computation in Keras.

# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)

Listing 26.3: Left-Pad Sequences to all be the Same Length.

We can now define, compile and fit our LSTM model. The first layer is the Embedding layer that uses 32-length vectors to represent each word. The next layer is the LSTM layer with 100 memory units (smart neurons). Finally, because this is a classification problem we use a Dense output layer with a single neuron and a sigmoid activation function to make 0 or 1 predictions for the two classes (good and bad) in the problem. Because it is a binary classification problem, log loss is used as the loss function (binary_crossentropy in Keras). The efficient ADAM optimization algorithm is used. The model is fit for only 3 epochs because it quickly overfits the problem. A large batch size of 64 reviews is used to space out weight updates.

# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=3, batch_size=64)

Listing 26.4: Define and Fit LSTM Model for the IMDB Dataset.

Once fit, we estimate the performance of the model on unseen reviews.

# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Listing 26.5: Evaluate the Fit Model.

For completeness, here is the full code listing for this LSTM network on the IMDB dataset.

# LSTM for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, nb_epoch=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Listing 26.6: Simple LSTM Model for the IMDB Dataset.

Running this example produces the following output.

Epoch 1/3
16750/16750 [==============================] - 107s - loss: 0.5570 - acc: 0.7149
Epoch 2/3
16750/16750 [==============================] - 107s - loss: 0.3530 - acc: 0.8577
Epoch 3/3
16750/16750 [==============================] - 107s - loss: 0.2559 - acc: 0.9019
Accuracy: 86.79%

Listing 26.7: Output from Running the Simple LSTM Model.

You can see that this simple LSTM with little tuning achieves near state-of-the-art results on the IMDB problem. Importantly, this is a template that you can use to apply LSTM networks to your own sequence classification problems. Now, let's look at some extensions of this simple model that you may also want to bring to your own problems.

26.2 LSTM For Sequence Classification With Dropout

Recurrent neural networks like the LSTM generally have the problem of overfitting. Dropout can be applied between layers using the Dropout Keras layer. We can do this easily by adding new Dropout layers between the Embedding and LSTM layers and the LSTM and Dense output layers. We can also add dropout to the input of the Embedding layer by using the dropout parameter. For example:
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, dropout=0.2))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

Listing 26.8: Define LSTM Model with Dropout Layers.

The full code listing example above with the addition of Dropout layers is as follows:

# LSTM with Dropout for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, dropout=0.2))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, nb_epoch=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Listing 26.9: LSTM Model with Dropout Layers for the IMDB Dataset.

Running this example provides the following output.

Epoch 1/3
16750/16750 [==============================] - 108s - loss: 0.5802 - acc: 0.6898
Epoch 2/3
16750/16750 [==============================] - 108s - loss: 0.4112 - acc: 0.8232
Epoch 3/3
16750/16750 [==============================] - 108s - loss: 0.3825 - acc: 0.8365
Accuracy: 85.56%

Listing 26.10: Output from Running the LSTM Model with Dropout Layers.

We can see dropout having the desired impact on training, with a slightly slower trend in convergence and, in this case, a lower final accuracy. The model could probably use a few more epochs of training and may achieve a higher skill (try it and see). Alternately, dropout can be applied to the input and recurrent connections of the memory units of the LSTM precisely and separately. Keras provides this capability with parameters on the LSTM layer: dropout_W for configuring the input dropout and dropout_U for configuring the recurrent dropout. For example, we can modify the first example to add dropout to the input and recurrent connections as follows:

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, dropout=0.2))
model.add(LSTM(100, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1, activation='sigmoid'))

Listing 26.11: Define LSTM Model with Dropout on Gates.

The full code listing with the more precise LSTM dropout is listed below for completeness.

# LSTM with dropout for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, dropout=0.2))
model.add(LSTM(100, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, nb_epoch=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Listing 26.12: LSTM Model with Dropout on Gates for the IMDB Dataset.

Running this example provides the following output.
Epoch 1/3
16750/16750 [==============================] - 112s - loss: 0.6623 - acc: 0.5935
Epoch 2/3
16750/16750 [==============================] - 113s - loss: 0.5159 - acc: 0.7484
Epoch 3/3
16750/16750 [==============================] - 113s - loss: 0.4502 - acc: 0.7981
Accuracy: 82.82%

Listing 26.13: Output from Running the LSTM Model with Dropout on the Gates.

We can see that the LSTM-specific dropout has a more pronounced effect on the convergence of the network than the layer-wise dropout. As above, the number of epochs was kept constant and could be increased to see if the skill of the model can be further lifted. Dropout is a powerful technique for combating overfitting in your LSTM models and it is a good idea to try both methods, but you may get better results with the gate-specific dropout provided in Keras.

26.3 LSTM and CNN For Sequence Classification

Convolutional neural networks excel at learning the spatial structure in input data. The IMDB review data does have a one-dimensional spatial structure in the sequence of words in reviews, and the CNN may be able to pick out invariant features for good and bad sentiment. These learned spatial features may then be learned as sequences by an LSTM layer. We can easily add a one-dimensional CNN and max pooling layers after the Embedding layer, which then feed the consolidated features to the LSTM. We can use a smallish set of 32 features with a small filter length of 3. The pooling layer can use the standard length of 2 to halve the feature map size. For example, we would create the model as follows:

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Convolution1D(nb_filter=32, filter_length=3, border_mode='same', activation='relu'))
model.add(MaxPooling1D(pool_length=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))

Listing 26.14: Define LSTM and CNN Model.

The full code listing with the CNN and LSTM layers is listed below for completeness.

# LSTM and CNN for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.convolutional import Convolution1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Convolution1D(nb_filter=32, filter_length=3, border_mode='same', activation='relu'))
model.add(MaxPooling1D(pool_length=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, nb_epoch=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Listing 26.15: LSTM and CNN Model for the IMDB Dataset.

Running this example provides the following output.

Epoch 1/3
16750/16750 [==============================] - 58s - loss: 0.5186 - acc: 0.7263
Epoch 2/3
16750/16750 [==============================] - 58s - loss: 0.2946 - acc: 0.8825
Epoch 3/3
16750/16750 [==============================] - 58s - loss: 0.2291 - acc: 0.9126
Accuracy: 86.36%

Listing 26.16: Output from Running the LSTM and CNN Model.

We can see that we achieve similar results to the first example, although with fewer weights and a faster training time. I would expect that even better results could be achieved if this example was further extended to use dropout.

26.4 Summary

In this project you discovered how to develop LSTM network models for sequence classification predictive modeling problems. Specifically, you learned:

 - How to develop a simple single layer LSTM model for the IMDB movie review sentiment classification problem.
 - How to extend your LSTM model with layer-wise and LSTM-specific dropout to reduce overfitting.
 - How to combine the spatial structure learning properties of a Convolutional Neural Network with the sequence learning of an LSTM.
26.4.1 Next

In completing this project you learned how you can use LSTM recurrent neural networks for sequence classification. In the next lesson you will build upon your understanding of LSTM networks and better understand how the Keras API maintains state for simple sequence prediction problems.
Chapter 27

Understanding Stateful LSTM Recurrent Neural Networks

A powerful and popular recurrent neural network is the long short-term memory network, or LSTM. It is widely used because the architecture overcomes the vanishing and exploding gradient problem that plagues all recurrent neural networks, allowing very large and very deep networks to be created. Like other recurrent neural networks, LSTM networks maintain state, and the specifics of how this is implemented in the Keras framework can be confusing. In this lesson you will discover exactly how state is maintained in LSTM networks by the Keras deep learning library. After reading this lesson you will know:

 - How to develop a naive LSTM network for a sequence prediction problem.
 - How to carefully manage state through batches and features with an LSTM network.
 - How to manually manage state in an LSTM network for stateful prediction.

Let's get started.

27.1 Problem Description: Learn the Alphabet

In this tutorial we are going to develop and contrast a number of different LSTM recurrent neural network models. The context of these comparisons will be a simple sequence prediction problem of learning the alphabet. That is, given a letter of the alphabet, predict the next letter of the alphabet. This is a simple sequence prediction problem that, once understood, can be generalized to other sequence prediction problems like time series prediction and sequence classification. Let's prepare the problem with some Python code that we can reuse from example to example. Firstly, let's import all of the classes and functions we plan to use in this tutorial.

import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils

Listing 27.1: Import Classes and Functions.
Next, we can seed the random number generator to ensure that the results are the same each time the code is executed.

# fix random seed for reproducibility
numpy.random.seed(7)

Listing 27.2: Seed the Random Number Generators.

We can now define our dataset, the alphabet. We define the alphabet in uppercase characters for readability. Neural networks model numbers, so we need to map the letters of the alphabet to integer values. We can do this easily by creating a dictionary (map) of the letter index to the character. We can also create a reverse lookup for converting predictions back into characters to be used later.

# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))

Listing 27.3: Define the Alphabet Dataset.

Now we need to create our input and output pairs on which to train our neural network. We can do this by defining an input sequence length, then reading sequences from the input alphabet sequence. For example, we use an input length of 1. Starting at the beginning of the raw input data, we can read off the first letter A and the next letter as the prediction B. We move along one character and repeat until we reach a prediction of Z.

# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out

Listing 27.4: Create Patterns from Dataset.

We also print out the input pairs for sanity checking. Running the code to this point will produce the following output, summarizing input sequences of length 1 and a single output character.

A -> B
B -> C
C -> D
D -> E
E -> F
F -> G
G -> H
H -> I
I -> J
J -> K
K -> L
L -> M
M -> N
N -> O
O -> P
P -> Q
Q -> R
R -> S
S -> T
T -> U
U -> V
V -> W
W -> X
X -> Y
Y -> Z

Listing 27.5: Sample Alphabet Training Patterns.

We need to reshape the NumPy array into a format expected by the LSTM networks, that is [samples, time steps, features].

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))

Listing 27.6: Reshape Training Patterns for LSTM Layer.

Once reshaped, we can then normalize the input integers to the range 0-to-1, the range of the sigmoid activation functions used by the LSTM network.

# normalize
X = X / float(len(alphabet))

Listing 27.7: Normalize Training Patterns.

Finally, we can think of this problem as a sequence classification task, where each of the 26 letters represents a different class. As such, we can convert the output (y) to a one hot encoding, using the Keras built-in function to_categorical().

# one hot encode the output variable
y = np_utils.to_categorical(dataY)

Listing 27.8: One Hot Encode Output Patterns.

We are now ready to fit different LSTM models.

27.2 LSTM for Learning One-Char to One-Char Mapping

Let's start off by designing a simple LSTM to learn how to predict the next character in the alphabet given the context of just one character. We will frame the problem as a random collection of one-letter input to one-letter output pairs. As we will see, this is a difficult framing of the problem for the LSTM to learn. Let's define an LSTM network with 32 units and a Dense output layer with a softmax activation function for making predictions. Because this is a multiclass classification problem, we can use the log loss function (called categorical_crossentropy in Keras), and optimize the network using the ADAM optimization function. The model is fit over 500 epochs with a batch size of 1.
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)

Listing 27.9: Define and Fit LSTM Network Model.

After we fit the model we can evaluate and summarize the performance on the entire training dataset.

# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))

Listing 27.10: Evaluate the Fit LSTM Network Model.

We can then re-run the training data through the network and generate predictions, converting both the input and output pairs back into their original character format to get a visual idea of how well the network learned the problem.

# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result

Listing 27.11: Make Predictions Using the Fit LSTM Network.

The entire code listing is provided below for completeness.

# Naive LSTM to learn one-char to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result

Listing 27.12: LSTM Network for one-char to one-char Mapping.

Running this example produces the following output.

Model Accuracy: 84.00%
['A'] -> B
['B'] -> C
['C'] -> D
['D'] -> E
['E'] -> F
['F'] -> G
['G'] -> H
['H'] -> I
['I'] -> J
['J'] -> K
['K'] -> L
['L'] -> M
['M'] -> N
['N'] -> O
['O'] -> P
['P'] -> Q
['Q'] -> R
['R'] -> S
['S'] -> T
['T'] -> U
['U'] -> W
['V'] -> Y
['W'] -> Z
['X'] -> Z
['Y'] -> Z
Listing 27.13: Output from the one-char to one-char Mapping.

We can see that this problem is indeed difficult for the network to learn. The reason is that the poor LSTM units do not have any context to work with. Each input-output pattern is shown to the network in a random order and the state of the network is reset after each pattern (each batch where each batch contains one pattern). This is an abuse of the LSTM network architecture, treating it like a standard Multilayer Perceptron. Next, let's try a different framing of the problem in order to provide more sequence to the network from which to learn.

27.3 LSTM for a Feature Window to One-Char Mapping

A popular approach to adding more context to data for Multilayer Perceptrons is to use the window method. This is where previous steps in the sequence are provided as additional input features to the network. We can try the same trick to provide more context to the LSTM network. Here, we increase the sequence length from 1 to 3, for example:

# prepare the dataset of input to output pairs encoded as integers
seq_length = 3

Listing 27.14: Increase Sequence Length.

Which creates training patterns like:

ABC -> D
BCD -> E
CDE -> F

Listing 27.15: Sample of Longer Input Sequence Length.

Each element in the sequence is then provided as a new input feature to the network. This requires a modification of how the input sequences are reshaped in the data preparation step:

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), 1, seq_length))

Listing 27.16: Reshape Input so Sequence is Features.

It also requires a modification of how the sample patterns are reshaped when demonstrating predictions from the model.

x = numpy.reshape(pattern, (1, 1, len(pattern)))

Listing 27.17: Reshape Input for Predictions so Sequence is Features.

The entire code listing is provided below for completeness.

# Naive LSTM to learn three-char window to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), 1, seq_length))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, 1, len(pattern)))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result

Listing 27.18: LSTM Network for three-char features to one-char Mapping.

Running this example provides the following output.

Model Accuracy: 86.96%
['A', 'B', 'C'] -> D
['B', 'C', 'D'] -> E
['C', 'D', 'E'] -> F
['D', 'E', 'F'] -> G
['E', 'F', 'G'] -> H
['F', 'G', 'H'] -> I
['G', 'H', 'I'] -> J
['H', 'I', 'J'] -> K
['I', 'J', 'K'] -> L
['J', 'K', 'L'] -> M
['K', 'L', 'M'] -> N
['L', 'M', 'N'] -> O
['M', 'N', 'O'] -> P
['N', 'O', 'P'] -> Q
['O', 'P', 'Q'] -> R
['P', 'Q', 'R'] -> S
['Q', 'R', 'S'] -> T
['R', 'S', 'T'] -> U
['S', 'T', 'U'] -> V
['T', 'U', 'V'] -> Y
['U', 'V', 'W'] -> Z
['V', 'W', 'X'] -> Z
['W', 'X', 'Y'] -> Z

Listing 27.19: Output from the three-char Features to one-char Mapping.

We can see a small lift in performance that may or may not be real. This is a simple problem that we were still not able to learn with LSTMs even with the window method. Again, this is a misuse of the LSTM network by a poor framing of the problem. Indeed, the sequences of letters are time steps of one feature rather than one time step of separate features. We have given more context to the network, but not more sequence as it expected. In the next section, we will give more context to the network in the form of time steps.

27.4 LSTM for a Time Step Window to One-Char Mapping

In Keras, the intended use of LSTMs is to provide context in the form of time steps, rather than windowed features like with other network types. We can take our first example and simply change the sequence length from 1 to 3.

seq_length = 3

Listing 27.20: Increase Sequence Length.

Again, this creates input-output pairs that look like:

ABC -> D
BCD -> E
CDE -> F
DEF -> G

Listing 27.21: Sample of Longer Input Sequence Length.

The difference is that the reshaping of the input data takes the sequence as a time step sequence of one feature, rather than a single time step of multiple features.

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))

Listing 27.22: Reshape Input so Sequence is Time Steps.

This is the intended use of providing sequence context to your LSTM in Keras. The full code example is provided below for completeness.

# Naive LSTM to learn three-char time steps to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result

Listing 27.23: LSTM Network for three-char Time Steps to one-char Mapping.

Running this example provides the following output.

Model Accuracy: 100.00%
['A', 'B', 'C'] -> D
['B', 'C', 'D'] -> E
['C', 'D', 'E'] -> F
['D', 'E', 'F'] -> G
['E', 'F', 'G'] -> H
['F', 'G', 'H'] -> I
['G', 'H', 'I'] -> J
['H', 'I', 'J'] -> K
['I', 'J', 'K'] -> L
['J', 'K', 'L'] -> M
['K', 'L', 'M'] -> N
['L', 'M', 'N'] -> O
['M', 'N', 'O'] -> P
['N', 'O', 'P'] -> Q
['O', 'P', 'Q'] -> R
['P', 'Q', 'R'] -> S
['Q', 'R', 'S'] -> T
['R', 'S', 'T'] -> U
['S', 'T', 'U'] -> V
['T', 'U', 'V'] -> W
['U', 'V', 'W'] -> X
['V', 'W', 'X'] -> Y
['W', 'X', 'Y'] -> Z

Listing 27.24: Output from the three-char Time Steps to one-char Mapping.

We can see that the model learns the problem perfectly, as evidenced by the model evaluation and the example predictions. But it has learned a simpler problem. Specifically, it has learned to predict the next letter from a sequence of three letters in the alphabet. It can be shown any random sequence of three letters from the alphabet and predict the next letter. It cannot actually enumerate the alphabet. I expect that a large enough Multilayer Perceptron network might be able to learn the same mapping using the window method. The LSTM networks are stateful. They should be able to learn the whole alphabet sequence, but by default the Keras implementation resets the network state after each training batch.

27.5 LSTM State Maintained Between Samples Within A Batch

The Keras implementation of LSTMs resets the state of the network after each batch. This suggests that if we had a batch size large enough to hold all input patterns, and if all the input patterns were ordered sequentially, the LSTM could use the context of the sequence within the batch to better learn the sequence. We can demonstrate this easily by modifying the first example for learning a one-to-one mapping and increasing the batch size from 1 to the size of the training dataset. Additionally, Keras shuffles the training dataset before each training epoch. To ensure the training data patterns remain sequential, we can disable this shuffling.

model.fit(X, y, nb_epoch=5000, batch_size=len(dataX), verbose=2, shuffle=False)

Listing 27.25: Increase Batch Size to Cover Entire Dataset.

The network will learn the mapping of characters using the within-batch sequence, but this context will not be available to the network when making predictions. We can evaluate both the ability of the network to make predictions randomly and in sequence. The full code example is provided below for completeness.

# Naive LSTM to learn one-char to one-char mapping with all data in each batch
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out
# convert list of lists to array and pad sequences if needed
X = pad_sequences(dataX, maxlen=seq_length, dtype='float32')
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (X.shape[0], seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(16, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=5000, batch_size=len(dataX), verbose=2, shuffle=False)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result
# demonstrate predicting random patterns
print "Test a Random Pattern:"
for i in range(0,20):
    pattern_index = numpy.random.randint(len(dataX))
    pattern = dataX[pattern_index]
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print seq_in, "->", result

Listing 27.26: LSTM Network for one-char to one-char Mapping Within Batch.

Running the example provides the following output.

Model Accuracy: 100.00%
['A'] -> B
['B'] -> C
['C'] -> D
['D'] -> E
['E'] -> F
['F'] -> G
['G'] -> H
['H'] -> I
['I'] -> J
['J'] -> K
['K'] -> L
['L'] -> M
['M'] -> N
['N'] -> O
['O'] -> P
['P'] -> Q
['Q'] -> R
['R'] -> S
['S'] -> T
['T'] -> U
['U'] -> V
['V'] -> W
['W'] -> X
['X'] -> Y
['Y'] -> Z
Test a Random Pattern:
['T'] -> U
['V'] -> W
['M'] -> N
['Q'] -> R
['D'] -> E
['V'] -> W
['T'] -> U
['U'] -> V
['J'] -> K
['F'] -> G
['N'] -> O
['B'] -> C
['M'] -> N
['F'] -> G
['F'] -> G
['P'] -> Q
['A'] -> B
['K'] -> L
['W'] -> X
['E'] -> F

Listing 27.27: Output from the one-char to one-char Mapping Within Batch.
As we expected, the network is able to use the within-sequence context to learn the alphabet, achieving 100% accuracy on the training data. Importantly, the network can make accurate predictions for the next letter in the alphabet for randomly selected characters. Very impressive.

27.6 Stateful LSTM for a One-Char to One-Char Mapping

We have seen that we can break up our raw data into fixed size sequences and that this representation can be learned by the LSTM, but only to learn random mappings of 3 characters to 1 character. We have also seen that we can pervert the batch size to offer more sequence to the network, but only during training. Ideally, we want to expose the network to the entire sequence and let it learn the inter-dependencies, rather than us defining those dependencies explicitly in the framing of the problem.

We can do this in Keras by making the LSTM layers stateful and manually resetting the state of the network at the end of the epoch, which is also the end of the training sequence. This is truly how the LSTM networks are intended to be used. We find that by allowing the network itself to learn the dependencies between the characters, we need a smaller network (half the number of units) and fewer training epochs (almost half). We first need to define our LSTM layer as stateful. In so doing, we must explicitly specify the batch size as a dimension on the input shape. This also means that when we evaluate the network or make predictions, we must also specify and adhere to this same batch size. This is not a problem now as we are using a batch size of 1. It could introduce difficulties when the batch size is not one, as predictions will need to be made in batch and in sequence.

batch_size = 1
model.add(LSTM(16, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))

Listing 27.28: Define a Stateful LSTM Layer.

An important difference in training the stateful LSTM is that we train it manually one epoch at a time and reset the state after each epoch. We can do this in a for loop. Again, we do not shuffle the input, preserving the sequence in which the input training data was created.

for i in range(300):
    model.fit(X, y, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()

Listing 27.29: Manually Manage LSTM State For Each Epoch.

As mentioned, we specify the batch size when evaluating the performance of the network on the entire training dataset.

# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1]*100))

Listing 27.30: Evaluate Model Using Pre-defined Batch Size.

Finally, we can demonstrate that the network has indeed learned the entire alphabet. We can seed it with the first letter A, request a prediction, feed the prediction back in as an input, and repeat the process all the way to Z.

# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet)-1):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print int_to_char[seed[0]], "->", int_to_char[index]
    seed = [index]
model.reset_states()

Listing 27.31: Seed Network and Make Predictions from A to Z.

We can also see if the network can make predictions starting from an arbitrary letter.

# demonstrate a random starting point
letter = "K"
seed = [char_to_int[letter]]
print "New start: ", letter
for i in range(0, 5):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print int_to_char[seed[0]], "->", int_to_char[index]
    seed = [index]
model.reset_states()

Listing 27.32: Seed Network with a Random Letter and a Sequence of Predictions.

The entire code listing is provided below for completeness.

# Stateful LSTM to learn one-char to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print seq_in, '->', seq_out
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(16, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
for i in range(300):
    model.fit(X, y, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet)-1):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print int_to_char[seed[0]], "->", int_to_char[index]
    seed = [index]
model.reset_states()
# demonstrate a random starting point
letter = "K"
seed = [char_to_int[letter]]
print "New start: ", letter
for i in range(0, 5):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print int_to_char[seed[0]], "->", int_to_char[index]
    seed = [index]
model.reset_states()

Listing 27.33: Stateful LSTM Network for one-char to one-char Mapping.

Running the example provides the following output.

Model Accuracy: 100.00%
A -> B
B -> C
C -> D
D -> E
E -> F
F -> G
G -> H
H -> I
I -> J
J -> K
K -> L
L -> M
M -> N
N -> O
O -> P
P -> Q
Q -> R
R -> S
S -> T
T -> U
U -> V
V -> W
W -> X
X -> Y
Y -> Z
New start: K
K -> B
B -> C
C -> D
D -> E
E -> F

Listing 27.34: Output from the Stateful LSTM for one-char to one-char Mapping.

We can see that the network has memorized the entire alphabet perfectly. It used the context of the samples themselves and learned whatever dependency it needed to predict the next character in the sequence. We can also see that if we seed the network with the first letter, it can correctly rattle off the rest of the alphabet. However, it has only learned the full alphabet sequence, and only from a cold start. When asked to predict the next letter from K, it predicts B and falls back into regurgitating the entire alphabet. To truly predict the letter after K, the state of the network would need to be warmed up by iteratively feeding it the letters from A to J. This tells us that we could achieve the same effect with a stateless LSTM by preparing training data like:

---a -> b
--ab -> c
-abc -> d
abcd -> e

Listing 27.35: Sample of Equivalent Training Data for Non-Stateful LSTM Layers.

Here the input sequence length is fixed at 25 (a-to-y to predict z) and patterns are prefixed with zero-padding. Finally, this raises the question of training an LSTM network using variable length input sequences to predict the next character.

27.7 LSTM with Variable Length Input to One-Char Output

In the previous section we discovered that the Keras stateful LSTM was really only a shortcut to replaying the first n-sequences, and didn't really help us learn a generic model of the alphabet. In this section we explore a variation of the stateless LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences of letters and predict the next letter in the alphabet.
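Before walking through the changes, it can help to see concretely what left-hand-side (prefix) zero padding does to integer-encoded sequences of different lengths, since that is how the variable length inputs below are made rectangular. The following is a minimal illustrative sketch, not part of the original listing; the example strings are arbitrary.

# illustrative only: what prefix (left-hand-side) padding looks like
from keras.preprocessing.sequence import pad_sequences

alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
char_to_int = dict((c, i) for i, c in enumerate(alphabet))

# three subsequences of different lengths, encoded as integers
sequences = [[char_to_int[c] for c in s] for s in ["B", "CDE", "KLMNO"]]
print(pad_sequences(sequences, maxlen=5))
# expected output, with zeros added on the left as padding:
# [[ 0  0  0  0  1]
#  [ 0  0  2  3  4]
#  [10 11 12 13 14]]

The full listing below also passes dtype='float32' to pad_sequences() so that the padded values can be normalized before training.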
Firstly, we are changing the framing of the problem. To simplify, we will define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of the subsequences of the alphabet that will be drawn for training. In extensions, this could just as easily be set to the full alphabet (26) or longer if we allow looping back to the start of the sequence. We also need to define the number of random sequences to create, in this case 1,000. This too could be more or less. I expect fewer patterns are actually required.

# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet)-2)
    end = numpy.random.randint(start, min(start+max_len, len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print sequence_in, '->', sequence_out

Listing 27.36: Create Dataset of Variable Length Input Sequences.

Running this code in the broader context will create input patterns that look like the following:

PQRST -> U
W -> X
O -> P
OPQ -> R
IJKLM -> N
QRSTU -> V
ABCD -> E
X -> Y
GHIJ -> K

Listing 27.37: Sample of Variable Length Input Sequences.

The input sequences vary in length between 1 and max_len and therefore require zero padding. Here, we use left-hand-side (prefix) padding with the Keras built-in pad_sequences() function.

X = pad_sequences(dataX, maxlen=max_len, dtype='float32')

Listing 27.38: Left-Pad Variable Length Input Sequences.

The trained model is evaluated on randomly selected input patterns. These could just as easily be new randomly generated sequences of characters. I also believe this could be a linear sequence seeded with A, with outputs fed back in as single character inputs. The full code listing is provided below for completeness.

# LSTM with Variable Length Input Sequences to One Character Output
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
27. 7. LSTM with Variable Length Input to One-Char Output226#fixrandomseedforreproducibilitynumpy. random. seed(7)#definetherawdatasetalphabet ="ABCDEFGHIJKLMNOPQRSTUVWXYZ"#createmappingofcharacterstointegers(0-25)andthereversechar_to_int =dict((c, i)fori, cinenumerate(alphabet))int_to_char =dict((i, c)fori, cinenumerate(alphabet))#preparethedatasetofinputtooutputpairsencodedasintegersnum_inputs = 1000max_len = 5data X = []data Y = []foriinrange(num_inputs):start = numpy. random. randint(len(alphabet)-2)end = numpy. random. randint(start,min(start+max_len,len(alphabet)-1))sequence_in = alphabet[start:end+1]sequence_out = alphabet[end + 1]data X. append([char_to_int[char]forcharinsequence_in])data Y. append(char_to_int[sequence_out])printsequence_in,->, sequence_out#convertlistofliststoarrayandpadsequencesifneeded X = pad_sequences(data X, maxlen=max_len, dtype= float32 )#reshape Xtobe[samples,timesteps,features]X = numpy. reshape(X, (X. shape[0], max_len, 1))#normalize X=X/float(len(alphabet))#onehotencodetheoutputvariabley = np_utils. to_categorical(data Y)#createandfitthemodelbatch_size = 1model = Sequential()model. add(LSTM(32, input_shape=(X. shape[1], 1)))model. add(Dense(y. shape[1], activation= softmax ))model. compile(loss= categorical_crossentropy, optimizer= adam, metrics=[ accuracy ])model. fit(X, y, nb_epoch=500, batch_size=batch_size, verbose=2)#summarizeperformanceofthemodelscores = model. evaluate(X, y, verbose=0)print("Model Accuracy:%. 2f%%"% (scores[1]*100))#demonstratesomemodelpredictionsforiinrange(20):pattern_index = numpy. random. randint(len(data X))pattern = data X[pattern_index]x = pad_sequences([pattern], maxlen=max_len, dtype= float32 )x = numpy. reshape(x, (1, max_len, 1))x=x/float(len(alphabet))prediction = model. predict(x, verbose=0)index = numpy. argmax(prediction)result = int_to_char[index]seq_in = [int_to_char[value]forvalueinpattern]printseq_in,"->", result Listing 27. 39: LSTM Network for Variable Length Sequences to one-char Mapping. Running this code produces the following output:Model Accuracy: 98. 90%[ Q, R ]-> S
['W', 'X'] -> Y
['W', 'X'] -> Y
['C', 'D'] -> E
['E'] -> F
['S', 'T', 'U'] -> V
['G', 'H', 'I', 'J', 'K'] -> L
['O', 'P', 'Q', 'R', 'S'] -> T
['C', 'D'] -> E
['O'] -> P
['N', 'O', 'P'] -> Q
['D', 'E', 'F', 'G', 'H'] -> I
['X'] -> Y
['K'] -> L
['M'] -> N
['R'] -> T
['K'] -> L
['E', 'F', 'G'] -> H
['Q'] -> R
['Q', 'R', 'S'] -> T

Listing 27.40: Output for the LSTM Network for Variable Length Sequences to one-char Mapping.

We can see that although the model did not learn the alphabet perfectly from the randomly generated subsequences, it did very well. The model was not tuned and may require more training or a larger network, or both (an exercise for the reader). This is a good natural extension to the all-sequential-input-examples-in-each-batch alphabet model learned above, in that it can handle ad hoc queries, but this time queries of arbitrary sequence length (up to the maximum length).

27.8 Summary

In this lesson you discovered LSTM recurrent neural networks in Keras and how they manage state. Specifically, you learned:

- How to develop a naive LSTM network for one-character to one-character prediction.
- How to configure a naive LSTM to learn a sequence across time steps within a sample.
- How to configure an LSTM to learn a sequence across samples by manually managing state.

27.8.1 Next

In this lesson you developed your understanding of how LSTM networks maintain state for simple sequence prediction problems. Up next you will use your understanding of LSTM networks to develop larger text generation models.
Chapter 28

Project: Text Generation With Alice in Wonderland

Recurrent neural networks can also be used as generative models. This means that in addition to being used for predictive models (making predictions) they can learn the sequences of a problem and then generate entirely new plausible sequences for the problem domain. Generative models like this are useful not only to study how well a model has learned a problem, but to learn more about the problem domain itself. In this project you will discover how to create a generative model for text, character-by-character, using LSTM recurrent neural networks in Python with Keras. After completing this project you will know:

- Where to download a free corpus of text that you can use to train text generative models.
- How to frame the problem of text sequences to a recurrent neural network generative model.
- How to develop an LSTM to generate plausible text sequences for a given problem.

Let's get started.

Note: You may want to speed up the computation for this tutorial by using GPU rather than CPU hardware, such as the process described in Chapter 5. This is a suggestion, not a requirement. The tutorial will work just fine on the CPU.

28.1 Problem Description: Text Generation

Many of the classical texts are no longer protected under copyright. This means that you can download all of the text for these books for free and use them in experiments, like creating generative models. Perhaps the best place to get access to free books that are no longer protected by copyright is Project Gutenberg (https://www.gutenberg.org/). In this tutorial we are going to use a favorite book from childhood as the dataset: Alice's Adventures in Wonderland by Lewis Carroll (https://www.gutenberg.org/ebooks/11). We are going to learn the dependencies between characters and the conditional probabilities of characters in sequences so that we can in turn generate wholly new and original sequences of characters. This tutorial is a lot of fun and I recommend repeating these experiments with
other books from Project Gutenberg. These experiments are not limited to text; you can also experiment with other ASCII data, such as computer source code, marked up documents in LaTeX, HTML or Markdown and more.

You can download the complete text in ASCII format (Plain Text UTF-8) for this book for free from http://www.gutenberg.org/cache/epub/11/pg11.txt and place it in your working directory with the filename wonderland.txt. Now we need to prepare the dataset ready for modeling. Project Gutenberg adds a standard header and footer to each book and this is not part of the original text. Open the file in a text editor and delete the header and footer. The header is obvious and ends with the text:

*** START OF THIS PROJECT GUTENBERG EBOOK ALICE'S ADVENTURES IN WONDERLAND ***

Listing 28.1: Signal of the End of the File Header.

The footer is all of the text after the line of text that says:

THE END

Listing 28.2: Signal of the Start of the File Footer.

You should be left with a text file that has about 3,330 lines of text.

28.2 Develop a Small LSTM Recurrent Neural Network

In this section we will develop a simple LSTM network to learn sequences of characters from Alice in Wonderland. In the next section we will use this model to generate new sequences of characters. Let's start off by importing the classes and functions we intend to use to train our model.

import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils

Listing 28.3: Import Classes and Functions.

Next we need to load the ASCII text for the book into memory and convert all of the characters to lowercase to reduce the vocabulary that the network must learn.

# load ascii text and convert to lowercase
filename = "wonderland.txt"
raw_text = open(filename).read()
raw_text = raw_text.lower()

Listing 28.4: Load the Dataset and Convert to Lowercase.

Now that the book is loaded, we must prepare the data for modeling by the neural network. We cannot model the characters directly; instead we must convert the characters to integers. We can do this easily by first creating a set of all of the distinct characters in the book, then creating a map of each character to a unique integer.
# create mapping of unique chars to integers, and a reverse mapping
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))

Listing 28.5: Create Char-to-Integer Mapping.

For example, the list of unique sorted lowercase characters in the book is as follows:

['\n', '\r', ' ', '!', '"', "'", '(', ')', '*', ',', '-', '.', ':', ';', '?', '[', ']', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '\xbb', '\xbf', '\xef']

Listing 28.6: List of Unique Characters in the Dataset.

You can see that there may be some characters that we could remove to further clean up the dataset, which would reduce the vocabulary and may improve the modeling process. Now that the book has been loaded and the mapping prepared, we can summarize the dataset.

n_chars = len(raw_text)
n_vocab = len(chars)
print "Total Characters: ", n_chars
print "Total Vocab: ", n_vocab

Listing 28.7: Summarize the Loaded Dataset.

Running the code to this point produces the following output.

Total Characters: 147674
Total Vocab: 47

Listing 28.8: Output from Summarize the Dataset.

We can see that the book has just under 150,000 characters and that when converted to lowercase there are only 47 distinct characters in the vocabulary for the network to learn, many more than the 26 in the alphabet. We now need to define the training data for the network. There is a lot of flexibility in how you choose to break up the text and expose it to the network during training. In this tutorial we will split the book text up into subsequences with a fixed length of 100 characters, an arbitrary length. We could just as easily split the data up by sentences, padding the shorter sequences and truncating the longer ones.

Each training pattern of the network is comprised of 100 time steps of one character (X) followed by one character output (y). When creating these sequences, we slide this window along the whole book one character at a time, allowing each character a chance to be learned from the 100 characters that preceded it (except the first 100 characters of course). For example, if the sequence length is 5 (for simplicity) then the first two training patterns would be as follows:

CHAPT -> E
HAPTE -> R

Listing 28.9: Example of Sequence Construction.

As we split up the book into these sequences, we convert the characters to integers using the lookup table we prepared earlier.

# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars-seq_length, 1):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
print "Total Patterns: ", n_patterns

Listing 28.10: Create Input/Output Patterns from Raw Dataset.

Running the code to this point shows us that when we split up the dataset into training data for the network to learn, we have just under 150,000 training patterns. This makes sense as, excluding the first 100 characters, we have one training pattern to predict each of the remaining characters.

Total Patterns: 147574

Listing 28.11: Output Summary of Created Patterns.

Now that we have prepared our training data we need to transform it so that it is suitable for use with Keras. First we must transform the list of input sequences into the form [samples, time steps, features] expected by an LSTM network. Next we need to rescale the integers to the range 0-to-1 to make the patterns easier to learn by the LSTM network that uses the sigmoid activation function by default.

Finally, we need to convert the output patterns (single characters converted to integers) into a one hot encoding. This is so that we can configure the network to predict the probability of each of the 47 different characters in the vocabulary (an easier representation) rather than trying to force it to predict precisely the next character. Each y value is converted into a sparse vector with a length of 47, full of zeros except for a 1 in the column for the letter (integer) that the pattern represents. For example, when n (integer value 31) is one hot encoded it looks as follows:

[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]

Listing 28.12: Sample of One Hot Encoded Output Integer.

We can implement these steps as below.

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (n_patterns, seq_length, 1))
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)

Listing 28.13: Prepare Data Ready For Modeling.

We can now define our LSTM model. Here we define a single hidden LSTM layer with 256 memory units. The network uses dropout with a probability of 20%. The output layer is a Dense layer using the softmax activation function to output a probability prediction for each of the 47 characters between 0 and 1. The problem is really a single character classification problem with 47 classes and as such is defined as optimizing the log loss (cross entropy), here using the ADAM optimization algorithm for speed.
28. 2. Develop a Small LSTM Recurrent Neural Network232#definethe LSTMmodelmodel = Sequential()model. add(LSTM(256, input_shape=(X. shape[1], X. shape[2])))model. add(Dropout(0. 2))model. add(Dense(y. shape[1], activation= softmax ))model. compile(loss= categorical_crossentropy, optimizer= adam )Listing 28. 14: Create LSTM Network to Model the Dataset. There is no test dataset. We are modeling the entire training dataset to learn the probabilityof each character in a sequence. We are not interested in the most accurate (classificationaccuracy) model of the training dataset. This would be a model that predicts each characterin the training dataset perfectly. Instead we are interested in a generalization of the datasetthat minimizes the chosen loss function. We are seeking a balance between generalization andoverfitting but short of memorization. The network is slow to train (about 300 seconds per epoch on an Nvidia K520 GPU). Becauseof the slowness and because of our optimization requirements, we will use model checkpointingto record all of the network weights to file each time an improvement in loss is observed at theend of the epoch. We will use the best set of weights (lowest loss) to instantiate our generativemodel in the next section. #definethecheckpointfilepath="weights-improvement-{epoch:02d}-{loss:. 4f}. hdf5"checkpoint = Model Checkpoint(filepath, monitor= loss, verbose=1, save_best_only=True,mode= min )callbacks_list = [checkpoint]Listing 28. 15: Create Checkpoints For Best Seen Model. We can now fit our model to the data. Here we use a modest number of 20 epochs and alarge batch size of 128 patterns. model. fit(X, y, nb_epoch=20, batch_size=128, callbacks=callbacks_list)Listing 28. 16: Fit the Model. The full code listing is provided below for completeness. #Small LSTMNetworkto Generate Textfor Alicein Wonderlandimportnumpyfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport LSTMfromkeras. callbacksimport Model Checkpointfromkeras. utilsimportnp_utils#loadasciitextandcoverttolowercasefilename ="wonderland. txt"raw_text =open(filename). read()raw_text = raw_text. lower()#createmappingofuniquecharstointegerschars =sorted(list(set(raw_text)))char_to_int =dict((c, i)fori, cinenumerate(chars))#summarizetheloadeddatan_chars =len(raw_text)
n_vocab = len(chars)
print "Total Characters: ", n_chars
print "Total Vocab: ", n_vocab
# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars-seq_length, 1):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
print "Total Patterns: ", n_patterns
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (n_patterns, seq_length, 1))
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
# define the checkpoint
filepath = "weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(X, y, nb_epoch=20, batch_size=128, callbacks=callbacks_list)

Listing 28.17: Complete Code Listing for LSTM to Model Dataset.

You will see different results because of the stochastic nature of the model, and because it is hard to fix the random seed for LSTM models to get 100% reproducible results. This is not a concern for this generative model. After running the example, you should have a number of weight checkpoint files in the local directory. You can delete them all except the one with the smallest loss value. For example, when I ran this example, below was the checkpoint with the smallest loss that I achieved.

weights-improvement-19-1.9435.hdf5

Listing 28.18: Sample of Checkpoint Weights for Well Performing Model.

The network loss decreased almost every epoch and I expect the network could benefit from training for many more epochs. In the next section we will look at using this model to generate new text sequences.
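As an aside, when many checkpoint files accumulate, the one with the lowest loss can be found programmatically rather than by eye. The short sketch below is illustrative only and not part of the original listing; it assumes the filename pattern used above, where the loss is the last dash-separated field.

# illustrative only: locate the checkpoint with the smallest loss
import glob

# matches files such as weights-improvement-19-1.9435.hdf5 written above
candidates = glob.glob("weights-improvement-*.hdf5")
# parse the loss value out of each filename and keep the minimum
best = min(candidates, key=lambda name: float(name.rsplit("-", 1)[1][:-len(".hdf5")]))
print("Best checkpoint: %s" % best)
# the chosen file could then be loaded with model.load_weights(best), as in the next section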
28.3 Generating Text with an LSTM Network

Generating text using the trained LSTM network is relatively straightforward. Firstly, we load the data and define the network in exactly the same way, except the network weights are loaded from a checkpoint file and the network does not need to be trained.

# load the network weights
filename = "weights-improvement-19-1.9435.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')

Listing 28.19: Load Checkpoint Network Weights.

Also, when preparing the mapping of unique characters to integers, we must create a reverse mapping that we can use to convert the integers back to characters so that we can understand the predictions.

int_to_char = dict((i, c) for i, c in enumerate(chars))

Listing 28.20: Mapping from Integers to Characters.

Finally, we need to actually make predictions. The simplest way to use the Keras LSTM model to make predictions is to start off with a seed sequence as input, generate the next character, then update the seed sequence to add the generated character on the end and trim off the first character. This process is repeated for as long as we want to predict new characters (e.g. a sequence of 1,000 characters in length). We can pick a random input pattern as our seed sequence, then print generated characters as we generate them.

# pick a random seed
start = numpy.random.randint(0, len(dataX)-1)
pattern = dataX[start]
print "Seed:"
print "\"", ''.join([int_to_char[value] for value in pattern]), "\""
# generate characters
for i in range(1000):
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    sys.stdout.write(result)
    pattern.append(index)
    pattern = pattern[1:len(pattern)]
print "\nDone."

Listing 28.21: Seed Network and Generate Text.

The full code example for generating text using the loaded LSTM model is listed below for completeness.

# Load LSTM network and generate text
import sys
import numpy
from keras.models import Sequential
from keras.layers import Dense
28. 3. Generating Text with an LSTM Network235fromkeras. layersimport Dropoutfromkeras. layersimport LSTMfromkeras. callbacksimport Model Checkpointfromkeras. utilsimportnp_utils#loadasciitextandcoverttolowercasefilename ="wonderland. txt"raw_text =open(filename). read()raw_text = raw_text. lower()#createmappingofuniquecharstointegers,andareversemappingchars =sorted(list(set(raw_text)))char_to_int =dict((c, i)fori, cinenumerate(chars))int_to_char =dict((i, c)fori, cinenumerate(chars))#summarizetheloadeddatan_chars =len(raw_text)n_vocab =len(chars)print"Total Characters:", n_charsprint"Total Vocab:", n_vocab#preparethedatasetofinputtooutputpairsencodedasintegersseq_length = 100data X = []data Y = []foriinrange(0, n_chars-seq_length, 1):seq_in = raw_text[i:i + seq_length]seq_out = raw_text[i + seq_length]data X. append([char_to_int[char]forcharinseq_in])data Y. append(char_to_int[seq_out])n_patterns =len(data X)print"Total Patterns:", n_patterns#reshape Xtobe[samples,timesteps,features]X = numpy. reshape(data X, (n_patterns, seq_length, 1))#normalize X=X/float(n_vocab)#onehotencodetheoutputvariabley = np_utils. to_categorical(data Y)#definethe LSTMmodelmodel = Sequential()model. add(LSTM(256, input_shape=(X. shape[1], X. shape[2])))model. add(Dropout(0. 2))model. add(Dense(y. shape[1], activation= softmax ))#loadthenetworkweightsfilename ="weights-improvement-19-1. 9435. hdf5"model. load_weights(filename)model. compile(loss= categorical_crossentropy, optimizer= adam )#pickarandomseedstart = numpy. random. randint(0,len(data X)-1)pattern = data X[start]print"Seed:"print"\"",. join([int_to_char[value]forvalueinpattern]),"\""#generatecharactersforiinrange(1000):x = numpy. reshape(pattern, (1,len(pattern), 1))x=x/float(n_vocab)prediction = model. predict(x, verbose=0)index = numpy. argmax(prediction)result = int_to_char[index]seq_in = [int_to_char[value]forvalueinpattern]
    sys.stdout.write(result)
    pattern.append(index)
    pattern = pattern[1:len(pattern)]
print "\nDone."

Listing 28.22: Complete Code Listing for Generating Text for the Small LSTM Network.

Running this example first outputs the selected random seed, then each character as it is generated. For example, below are the results from one run of this text generator. The random seed was:

be no mistake about it: it was neither more nor less than a pig, and she felt that it would be quit

Listing 28.23: Randomly Selected Sequence Used to Seed the Network.

The generated text with the random seed (cleaned up for presentation) was:

be no mistake about it: it was neither more nor less than a pig, and she felt that it would be quit
e aelin that she was a little want oe toietano a grt persent to the tas a little war th tee the tase
oa teettee the had been tinhgtt a little toiee at the cadl in a long tuiee aedunthet sheer was a little
tare gereen to be a gentle of the tabdit soenee the gad ouw ie the tay a tirt of toiet at the was a
little anonersen, and thiu had been woite io a lott of tueh a tiie and taede bot her aeain she cere thth
the bene tith the tere bane to tee toaete to tee the harter was a little tire the same oare cade an anl
anothe garee and the was so seat the was a little gareen and the sabdit, and the white rabbit wese tilel
an the caoe and the sabbit se teeteer, and the white rabbit wese tilel an the cadein a lonk tfne the
sabdi ano aroing to tea the was sf teet whitg the was a little tane oo thete the sabeit she was a
little tartig to the tar tf tee the tame of the cagd, and the white rabbit was a little toiee to be
anle tite thete ofs and the tabdit was the wiite rabbit, and

Listing 28.24: Generated Text With Random Seed Text.

We can note some observations about the generated text:

- It generally conforms to the line format observed in the original text of less than 80 characters before a new line.
- The characters are separated into word-like groups and most groups are actual English words (e.g. the, little and was), but many are not (e.g. lott, tiie and taede).
- Some of the words in sequence make sense (e.g. and the white rabbit), but many do not (e.g. wese tilel).

The fact that this character-based model of the book produces output like this is very impressive. It gives you a sense of the learning capabilities of LSTM networks. The results are not perfect. In the next section we look at improving the quality of results by developing a much larger LSTM network.
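As a side note, the seed does not have to be a random training pattern. The sketch below shows one way to seed the generator with your own text; it is illustrative only and not part of the original listing. It assumes the model, char_to_int, int_to_char, n_vocab and seq_length variables defined above, that the seed is lowercase, and that it only uses characters present in the book's vocabulary.

# illustrative only: seed generation with user-supplied text instead of a random pattern
import sys
import numpy

seed_text = "alice was beginning to get very tired of sitting by her sister".lower()
# the network expects exactly seq_length characters, so left-pad with spaces or truncate
seed_text = seed_text.rjust(seq_length)[-seq_length:]
pattern = [char_to_int[c] for c in seed_text]
for i in range(200):
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    sys.stdout.write(int_to_char[index])
    pattern.append(index)
    pattern = pattern[1:len(pattern)]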
28. 4. Larger LSTM Recurrent Neural Network23728. 4 Larger LSTM Recurrent Neural Network We got results, but not excellent results in the previous section. Now, we can try to improvethe quality of the generated text by creating a much larger network. We will keep the numberof memory units the same at 256, but add a second layer. model = Sequential()model. add(LSTM(256, input_shape=(X. shape[1], X. shape[2]), return_sequences=True))model. add(Dropout(0. 2))model. add(LSTM(256))model. add(Dropout(0. 2))model. add(Dense(y. shape[1], activation= softmax ))model. compile(loss= categorical_crossentropy, optimizer= adam )Listing 28. 25: Define a Stacked LSTM Network. We will also change the filename of the checkpointed weights so that we can tell the di↵erencebetween weights for this network and the previous (by appending the wordbiggerin thefilename). filepath="weights-improvement-{epoch:02d}-{loss:. 4f}-bigger. hdf5"Listing 28. 26: Filename for Checkpointing Network Weights for Larger Model. Finally, we will increase the number of training epochs from 20 to 50 and decrease the batchsize from 128 to 64 to give the network more of an opportunity to be updated and learn. Thefull code listing is presented below for completeness. #Larger LSTMNetworkto Generate Textfor Alicein Wonderlandimportnumpyfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport LSTMfromkeras. callbacksimport Model Checkpointfromkeras. utilsimportnp_utils#loadasciitextandcoverttolowercasefilename ="wonderland. txt"raw_text =open(filename). read()raw_text = raw_text. lower()#createmappingofuniquecharstointegerschars =sorted(list(set(raw_text)))char_to_int =dict((c, i)fori, cinenumerate(chars))#summarizetheloadeddatan_chars =len(raw_text)n_vocab =len(chars)print"Total Characters:", n_charsprint"Total Vocab:", n_vocab#preparethedatasetofinputtooutputpairsencodedasintegersseq_length = 100data X = []data Y = []foriinrange(0, n_chars-seq_length, 1):seq_in = raw_text[i:i + seq_length]seq_out = raw_text[i + seq_length]data X. append([char_to_int[char]forcharinseq_in])data Y. append(char_to_int[seq_out])
28. 4. Larger LSTM Recurrent Neural Network238n_patterns =len(data X)print"Total Patterns:", n_patterns#reshape Xtobe[samples,timesteps,features]X = numpy. reshape(data X, (n_patterns, seq_length, 1))#normalize X=X/float(n_vocab)#onehotencodetheoutputvariabley = np_utils. to_categorical(data Y)#definethe LSTMmodelmodel = Sequential()model. add(LSTM(256, input_shape=(X. shape[1], X. shape[2]), return_sequences=True))model. add(Dropout(0. 2))model. add(LSTM(256))model. add(Dropout(0. 2))model. add(Dense(y. shape[1], activation= softmax ))model. compile(loss= categorical_crossentropy, optimizer= adam )#definethecheckpointfilepath="weights-improvement-{epoch:02d}-{loss:. 4f}-bigger. hdf5"checkpoint = Model Checkpoint(filepath, monitor= loss, verbose=1, save_best_only=True,mode= min )callbacks_list = [checkpoint]#fitthemodelmodel. fit(X, y, nb_epoch=50, batch_size=64, callbacks=callbacks_list)Listing 28. 27: Complete Code Listing for the Larger LSTM Network. Running this example takes some time, at least 700 seconds per epoch. After running thisexample you may achieved a loss of about 1. 2. For example the best result I achieved fromrunning this model was stored in a checkpoint file with the name:weights-improvement-47-1. 2219-bigger. hdf5Listing 28. 28: Sample of Checkpoint Weights for Well Performing Larger Model. Achieving a loss of 1. 2219 at epoch 47. As in the previous section, we can use this bestmodel from the run to generate text. The only change we need to make to the text generationscript from the previous section is in the specification of the network topology and from whichfile to seed the network weights. The full code listing is provided below for completeness. #Load Larger LSTMnetworkandgeneratetextimportsysimportnumpyfromkeras. modelsimport Sequentialfromkeras. layersimport Densefromkeras. layersimport Dropoutfromkeras. layersimport LSTMfromkeras. callbacksimport Model Checkpointfromkeras. utilsimportnp_utils#loadasciitextandcoverttolowercasefilename ="wonderland. txt"raw_text =open(filename). read()raw_text = raw_text. lower()#createmappingofuniquecharstointegers,andareversemappingchars =sorted(list(set(raw_text)))char_to_int =dict((c, i)fori, cinenumerate(chars))int_to_char =dict((i, c)fori, cinenumerate(chars))#summarizetheloadeddata
28. 4. Larger LSTM Recurrent Neural Network239n_chars =len(raw_text)n_vocab =len(chars)print"Total Characters:", n_charsprint"Total Vocab:", n_vocab#preparethedatasetofinputtooutputpairsencodedasintegersseq_length = 100data X = []data Y = []foriinrange(0, n_chars-seq_length, 1):seq_in = raw_text[i:i + seq_length]seq_out = raw_text[i + seq_length]data X. append([char_to_int[char]forcharinseq_in])data Y. append(char_to_int[seq_out])n_patterns =len(data X)print"Total Patterns:", n_patterns#reshape Xtobe[samples,timesteps,features]X = numpy. reshape(data X, (n_patterns, seq_length, 1))#normalize X=X/float(n_vocab)#onehotencodetheoutputvariabley = np_utils. to_categorical(data Y)#definethe LSTMmodelmodel = Sequential()model. add(LSTM(256, input_shape=(X. shape[1], X. shape[2]), return_sequences=True))model. add(Dropout(0. 2))model. add(LSTM(256))model. add(Dropout(0. 2))model. add(Dense(y. shape[1], activation= softmax ))#loadthenetworkweightsfilename ="weights-improvement-47-1. 2219-bigger. hdf5"model. load_weights(filename)model. compile(loss= categorical_crossentropy, optimizer= adam )#pickarandomseedstart = numpy. random. randint(0,len(data X)-1)pattern = data X[start]print"Seed:"print"\"",. join([int_to_char[value]forvalueinpattern]),"\""#generatecharactersforiinrange(1000):x = numpy. reshape(pattern, (1,len(pattern), 1))x=x/float(n_vocab)prediction = model. predict(x, verbose=0)index = numpy. argmax(prediction)result = int_to_char[index]seq_in = [int_to_char[value]forvalueinpattern]sys. stdout. write(result)pattern. append(index)pattern = pattern[1:len(pattern)]print"\n Done. "Listing 28. 29: Complete Code Listing for Generating Text With the Larger LSTM Network. One example of running this text generation script produces the output below. The randomlychosen seed text was:d herself lying on the bank, with herheadinthe lap of her sister, who was gently brushing away s
Listing 28.30: Randomly Selected Sequence Used to Seed the Network.

The generated text with the seed (cleaned up for presentation) was:

herself lying on the bank, with her head in the lap of her sister, who was gently brushing away
so siee, and she sabbit said to herself and the sabbit said to herself and the soodway of the was a
little that she was a little lad good to the garden, and the sood of the mock turtle said to herself,
it was a little that the mock turtle said to see it said to sea it said to sea it say it the marge hard
sathn a little that she was so sereated to herself, and she sabbit said to herself, it was a little
little shated of the sooe of the coomouse it was a little lad good to the little gooder head. and said
to herself, it was a little little shated of the mouse of the good of the courte, and it was a little
little shated in a little that the was a little little shated of the thmee said to see it was a little
book of the was a little that she was so sereated to hare a little the began sitee of the was of the
was a little that she was so seally and the sabbit was a little lad good to the little gooder head of
the gad seared to see it was a little lad good to the little good

Listing 28.31: Generated Text With Random Seed Text.

We can see that generally there are fewer spelling mistakes and the text looks more realistic, but it is still quite nonsensical. For example, the same phrases get repeated again and again, like said to herself and little. Quotes are opened but not closed. These are better results but there is still a lot of room for improvement.

28.5 Extension Ideas to Improve the Model

Below are a sample of ideas that you may want to investigate to further improve the model:

- Predict fewer than 1,000 characters as output for a given seed.
- Remove all punctuation from the source text, and therefore from the model's vocabulary.
- Try a one hot encoding for the input sequences.
- Train the model on padded sentences rather than random sequences of characters.
- Increase the number of training epochs to 100 or many hundreds.
- Add dropout to the visible input layer and consider tuning the dropout percentage.
- Tune the batch size; try a batch size of 1 as a (very slow) baseline and larger sizes from there.
- Add more memory units to the layers and/or more layers.
- Experiment with scale factors (temperature) when interpreting the prediction probabilities (a sketch of this idea follows the list).
- Change the LSTM layers to be stateful to maintain state across batches.
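The temperature idea in the list above can be sketched with a few lines of NumPy. This is one common way to implement it, not code from the original tutorial; the function name and the temperature value are illustrative.

# illustrative only: sample an index from the predicted probabilities with a temperature
import numpy

def sample_with_temperature(preds, temperature=0.5):
    # lower temperatures sharpen the distribution, higher temperatures flatten it
    preds = numpy.asarray(preds).astype('float64')
    preds = numpy.log(preds + 1e-8) / temperature
    exp_preds = numpy.exp(preds)
    preds = exp_preds / numpy.sum(exp_preds)
    return numpy.argmax(numpy.random.multinomial(1, preds, 1))

# inside the generation loop this would replace the plain argmax, for example:
# index = sample_with_temperature(prediction[0], temperature=0.5)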
28.6 Summary

In this project you discovered how you can develop an LSTM recurrent neural network for text generation in Python with the Keras deep learning library. After completing this project you know:

- Where to download the ASCII text for classical books for free that you can use for training.
- How to train an LSTM network on text sequences and how to use the trained network to generate new sequences.
- How to develop stacked LSTM networks and lift the performance of the model.

28.6.1 Next

This tutorial concludes Part VI and your introduction to recurrent neural networks in Keras. Next, in Part VII, we will conclude this book and highlight additional resources that you can use to continue your studies.
Part VII

Conclusions
Chapter 29

How Far You Have Come

You made it. Well done. Take a moment and look back at how far you have come.

- You started off with an interest in deep learning and a strong desire to be able to practice and apply deep learning using Python.
- You downloaded, installed and started using Keras, perhaps for the first time, and started to get familiar with how to develop neural network models with the library.
- Slowly and steadily over the course of a number of lessons you learned how to use the various different features of the Keras library on neural network and deeper models for classical tabular, image and textual data.
- Building upon the recipes for common machine learning tasks, you worked through your first machine learning problems end-to-end using deep learning models in Python.
- Using standard templates, the recipes and the experience you have gathered, you are now capable of working through new and different predictive modeling machine learning problems with deep learning on your own.

Don't make light of this. You have come a long way in a short amount of time. You have developed the important and valuable skill of being able to work through machine learning problems with deep learning end-to-end using Python. This is a platform that is used by a majority of working data scientist professionals. The sky is the limit.

I want to take a moment and sincerely thank you for letting me help you start your deep learning journey with Python. I hope you keep learning and have fun as you continue to master machine learning.
Chapter 30

Getting More Help

This book has given you a foundation for applying deep learning in your own machine learning projects, but there is still a lot more to learn. In this chapter you will discover the places that you can go to get more help with deep learning, the Keras library, as well as neural networks in general.

30.1 Artificial Neural Networks

The field of neural networks has been around for decades. As such there are a wealth of papers, books and websites on the topic. Below are a few books that I recommend if you are looking for a deeper background in the field.

- Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks.
  http://amzn.to/1pZDFn0
- Neural Networks for Pattern Recognition.
  http://amzn.to/1W7J8GQ
- An Introduction to Neural Networks.
  http://amzn.to/1pZDFTP

30.2 Deep Learning

Deep learning is a new field. Unfortunately, resources on the topic are predominately academic focused rather than practical. Nevertheless, if you are looking to go deeper into the field of deep learning, below are some suggested resources.

- Deep Learning, a soon to be published textbook on deep learning by some of the pioneers in the field.
  http://www.deeplearningbook.org
- Learning Deep Architectures for AI (2009) provides a good but academic introduction paper to the field.
  http://goo.gl/MkUt6B
- Deep Learning in Neural Networks: An Overview (2014), another excellent but academic introduction paper to the field.
  http://arxiv.org/abs/1404.7828

30.3 Python Machine Learning

Python is a growing platform for applied machine learning. The strong attraction is that Python is a fully featured programming language (unlike R) and as such you can use the same code and libraries in developing your model as you use to deploy the model into operations. The premier machine learning library in Python is scikit-learn, built on top of SciPy.

- Visit the scikit-learn home page to learn more about the library and its capabilities.
  http://scikit-learn.org
- Visit the SciPy home page to learn more about the SciPy platform for scientific computing in Python.
  http://scipy.org
- Machine Learning Mastery with Python, the precursor to this book.
  https://machinelearningmastery.com/machine-learning-with-python

30.4 Keras Library

Keras is a fantastic but fast moving library. Large updates are still being made to the API and it is good to stay abreast of changes. You also need to know where you can ask questions to get more help with the platform.

- The Keras homepage is an excellent place to start, giving you full access to the user documentation.
  http://keras.io
- The Keras blog provides updates and tutorials on the platform.
  http://blog.keras.io
- The Keras project on GitHub hosts the code for the library. Importantly, you can use the issue tracker on the project to raise concerns and make feature requests after you have used the library.
  https://github.com/fchollet/keras
- The best place to get answers to your Keras questions is on the Google group email list.
  https://groups.google.com/forum/#!forum/keras-users
- A good secondary place to ask questions is StackOverflow; use the Keras tag with your question.
  http://stackoverflow.com/questions/tagged/keras

I am always here to help if you have any questions. You can email me directly via jason@MachineLearningMastery.com and put this book title in the subject of your email.