markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127)
---|---|---|---|---|---|
The syllables of the word `ભાવના` will thus be: | print(gujarati_syllables) | ['ભા', 'વ', 'ના']
| MIT | languages/south_asia/Gujarati_tutorial.ipynb | glaserti/tutorials |
Project 3: Implement SLAM --- Project Overview In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world! SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. > `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced; for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position: ```mu = matrix([[Px0], [Py0], [Px1], [Py1], [Lx0], [Ly0], [Lx1], [Ly1]])``` You can see that `mu` holds the poses first, `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider an `nx1` matrix to be a vector. Generating an environment In a real SLAM problem, you may be given a map that contains information about landmark locations; in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes. --- Create the world Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! `data` holds the sensor measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`. Helper functions You will be working with the `robot` class that may look familiar from the first notebook. In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook. | import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance) |
Landmarks: [[12, 44], [62, 98], [19, 13], [45, 12], [7, 97]]
Robot: [x=69.61429 y=95.52181]
| MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
A note on `make_data` The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for: 1. Instantiating a robot (using the robot class) 2. Creating a grid world with landmarks in it **This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.** The `data` it returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later. In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step: `measurement = data[i][0]` and `motion = data[i][1]` | # print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1]) | Example measurements:
[[0, -38.94955155697709, -7.2954814723926384], [1, 11.679250951477753, 46.597074026819655], [2, -30.450451619432496, -37.41378043748835], [3, -4.896442127766177, -38.434283116881524], [4, -43.08341118340028, 47.17699212819607]]
Example motion:
[-15.396274422511562, -12.765372454680524]
| MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
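Each measurement entry is a `[landmark_index, dx, dy]` triple, where `dx, dy` are the noisy distances from the robot to that landmark. A quick way to unpack them (a small illustration using the `data` generated above):

```python
# Unpack the measurement triples from the first time step
for lm_index, dx, dy in data[0][0]:
    print('landmark', lm_index, 'sensed at relative position', (dx, dy))
```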
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far, and with a certain amount of accuracy, when measuring the distance between its location and the location of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam. Initialize Constraints One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector. In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices. You may also choose to create two of each omega and xi (one for x and one for y positions). TODO: Write a function that initializes omega and xi Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values. *Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!* | def initialize_constraints(N, num_landmarks, world_size):
''' This function takes in a number of time steps N, number of landmarks, and a world_size,
and returns initialized constraint matrices, omega and xi.'''
## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
## TODO: Define the constraint matrix, Omega, with two initial "strength" values
## for the initial x, y location of our robot
omega = np.zeros((2*N + 2*num_landmarks, 2*N + 2*num_landmarks))
omega[0,0] = 1
omega[1,1] = 1
## TODO: Define the constraint *vector*, xi
## you can assume that the robot starts out in the middle of the world with 100% confidence
xi = np.zeros((2*N + 2*num_landmarks, 1))
xi[0] = world_size/2
xi[1] = world_size/2
return omega, xi
| _____no_output_____ | MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
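Before visualizing, a minimal sanity check of sizes and starting values may help (a sketch, assuming the single-omega/xi design above; with `N=3` and `num_landmarks=1` the matrix should be `2*(N + num_landmarks) = 8` rows and columns):

```python
# Hypothetical spot check of the initialized constraints
om, xv = initialize_constraints(N=3, num_landmarks=1, world_size=10.0)
assert om.shape == (8, 8) and xv.shape == (8, 1)
print(om[0, 0], om[1, 1])  # 1.0 1.0 -- initial confidence in x0, y0
print(xv[0, 0], xv[1, 0])  # 5.0 5.0 -- the robot starts at world_size / 2
```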
Test as you go It's good practice to test your code as you go. Since `slam` relies on creating and updating the constraint matrices `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters. Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization. **Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final slam function. This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code accordingly. The constraints should vary in size with the number of time steps and landmarks, as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`. | # import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5) | _____no_output_____ | MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
--- SLAM inputs In addition to `data`, your slam function will also take in: * N - The number of time steps that a robot will be moving and sensing* num_landmarks - The number of landmarks in the world* world_size - The size (w/h) of your world* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise` A note on noise Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process, only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by its respective `noise`. TODO: Implement Graph SLAM Follow the TODOs below to help you complete this slam implementation (these TODOs are in the recommended order), then test out your implementation! Updating with motion and measurements With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$. **You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!** | ## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
## TODO: Use your initialization to create constraint matrices, omega and xi
omega, xi = initialize_constraints(N, num_landmarks, world_size)
## TODO: Iterate through each time step in the data
## get all the motion and measurement data as you iterate
for t in range(N-1):
## TODO: update the constraint matrix/vector to account for all *measurements*
## this should be a series of additions that take into account the measurement noise
#print("data: ", len(data), data[t][0])
measurements = data[t][0]
for m in measurements:
Lnum = m[0]
Ldx = m[1]
Ldy = m[2]
omega[2*t+0] [2*t+0] += 1/measurement_noise
omega[2*t+1] [2*t+1] += 1/measurement_noise
omega[2*t+0] [2*(N+Lnum)+0] += -1/measurement_noise
omega[2*t+1] [2*(N+Lnum)+1] += -1/measurement_noise
omega[2*(N+Lnum)+0][2*t+0] += -1/measurement_noise
omega[2*(N+Lnum)+1][2*t+1] += -1/measurement_noise
omega[2*(N+Lnum)+0][2*(N+Lnum)+0] += 1/measurement_noise
omega[2*(N+Lnum)+1][2*(N+Lnum)+1] += 1/measurement_noise
xi[2*t+0] += -Ldx/measurement_noise
xi[2*t+1] += -Ldy/measurement_noise
xi[2*(N+Lnum)+0] += Ldx/measurement_noise
xi[2*(N+Lnum)+1] += Ldy/measurement_noise
## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
motion = data[t][1]
omega[2*t+0][2*t+0] += 1/motion_noise
omega[2*t+1][2*t+1] += 1/motion_noise
omega[2*t+0][2*t+2] += -1/motion_noise
omega[2*t+1][2*t+3] += -1/motion_noise
omega[2*t+2][2*t+0] += -1/motion_noise
omega[2*t+3][2*t+1] += -1/motion_noise
omega[2*t+2][2*t+2] += 1/motion_noise
omega[2*t+3][2*t+3] += 1/motion_noise
xi[2*t+0] += -motion[0]/motion_noise
xi[2*t+2] += motion[0]/motion_noise
xi[2*t+1] += -motion[1]/motion_noise
xi[2*t+3] += motion[1]/motion_noise
## TODO: After iterating through all the data
## Compute the best estimate of poses and landmark positions
## using the formula, omega_inverse * Xi
mu = np.linalg.inv(np.matrix(omega)) * xi
return mu # return `mu`
| _____no_output_____ | MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
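To make the noise-weighted updates concrete, here is a tiny worked example (an illustration only: a 1D world with one motion `dx = 3`, `motion_noise = 2.0`, and the robot anchored at a world center of 5.0):

```python
import numpy as np

w = 1.0 / 2.0                       # update weight = 1 / motion_noise
omega_1d = np.array([[1 + w, -w],   # initial-pose anchor plus one motion constraint
                     [-w,     w]])
xi_1d = np.array([[5.0 - 3.0 * w],  # world center, minus dx / motion_noise at pose 0
                  [3.0 * w]])       # plus dx / motion_noise at pose 1
mu_1d = np.linalg.inv(omega_1d) @ xi_1d
print(mu_1d.ravel())                # [5. 8.] -- pose 1 is pose 0 moved by dx = 3
```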
Helper functions To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated poses and landmark locations that your function has produced. First, given a result `mu` and the number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists. Then, we define a function that nicely prints out these lists; we will call both of these in the next step. | # a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
# create a list of poses
poses = []
for i in range(N):
poses.append((mu[2*i].item(), mu[2*i+1].item()))
# create a list of landmarks
landmarks = []
for i in range(num_landmarks):
landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))
# return completed lists
return poses, landmarks
def print_all(poses, landmarks):
print('\n')
print('Estimated Poses:')
for i in range(len(poses)):
print('['+', '.join('%.3f'%p for p in poses[i])+']')
print('\n')
print('Estimated Landmarks:')
for i in range(len(landmarks)):
print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
| _____no_output_____ | MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
Run SLAM Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks! What to Expect The `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`. With these values in mind, you should expect to see a result that displays two lists: 1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length, since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size. 2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. Landmark Locations If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement). | # call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
# get the lists of poses and landmarks
# and print them out
poses, landmarks = get_poses_landmarks(mu, N)
print_all(poses, landmarks) |
Estimated Poses:
[50.000, 50.000]
[35.859, 35.926]
[21.364, 23.942]
[6.980, 11.344]
[24.945, 20.405]
[43.518, 30.202]
[62.058, 37.373]
[79.693, 44.655]
[95.652, 52.956]
[77.993, 43.819]
[60.450, 33.659]
[41.801, 24.066]
[23.993, 15.292]
[7.068, 7.322]
[23.995, -0.325]
[32.465, 17.730]
[41.235, 37.599]
[50.421, 57.362]
[59.424, 75.357]
[67.357, 93.716]
Estimated Landmarks:
[11.692, 44.036]
[61.744, 96.855]
[19.061, 12.781]
[44.483, 11.522]
[6.063, 96.744]
| MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
Visualize the constructed world Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positions of the landmarks, created from only motion and measurement data! **Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.** | # import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
# print out the last pose
print('Last pose: ', poses[-1])
# display the last position of the robot *and* the landmark positions
display_world(int(world_size), poses[-1], landmarks) | Last pose: (67.35712814937992, 93.71611790835976)
| MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different? You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters? **Answer**: The true value of the final pose is [x=69.61429 y=95.52181], and it is close to the estimated pose [67.357, 93.716] from my slam implementation. The true landmarks are [12, 44], [62, 98], [19, 13], [45, 12], [7, 97], while the estimated ones are [11.692, 44.036], [61.744, 96.855], [19.061, 12.781], [44.483, 11.522], [6.063, 96.744]. If we moved and sensed more, the results would become more accurate. And with lower noise parameters, the results would be more accurate than with higher noise parameters. Testing To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies, it could be a matter of floating point accuracy or the calculation of the inverse matrix. Submit your project If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit! | # Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
|
Estimated Poses:
[50.000, 50.000]
[69.181, 45.665]
[87.743, 39.703]
[76.270, 56.311]
[64.317, 72.176]
[52.257, 88.154]
[44.059, 69.401]
[37.002, 49.918]
[30.924, 30.955]
[23.508, 11.419]
[34.180, 27.133]
[44.155, 43.846]
[54.806, 60.920]
[65.698, 78.546]
[77.468, 95.626]
[96.802, 98.821]
[75.957, 99.971]
[70.200, 81.181]
[64.054, 61.723]
[58.107, 42.628]
Estimated Landmarks:
[76.779, 42.887]
[85.065, 77.438]
[13.548, 95.652]
[59.449, 39.595]
[69.263, 94.240]
| MIT | 3. Landmark Detection and Tracking.ipynb | mitsunami/SLAM |
In this notebook we investigate a simple, custom-designed Inception network on PDU data | %reload_ext autoreload
%autoreload 2
%matplotlib inline | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Importing the libraries | import torch
import torch.nn as nn
import torch.utils.data as Data
from torch.autograd import Function, Variable
from torch.optim import lr_scheduler
import torchvision
import torchvision.transforms as transforms
import torch.backends.cudnn as cudnn
from pathlib import Path
import os
import copy
import math
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime
import time as time
import warnings | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Checking whether the GPU is active | torch.backends.cudnn.enabled
torch.cuda.is_available()
torch.cuda.init() | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
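If you want the notebook to also run without a GPU, a common pattern is to select the device dynamically (a sketch; the rest of this notebook assumes CUDA is available, since it calls `.cuda()` directly):

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```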
Dataset paths | PATH = Path("/home/saman/Saman/data/PDU_Raw_Data01/Test06_600x30/")
train_path = PATH / 'train' / 'Total'
valid_path = PATH / 'valid' / 'Total'
test_path = PATH / 'test' / 'Total' | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Model parameters | Num_Filter1= 16
Num_Filter2= 64
Ker_Sz1 = 5
Ker_Sz2 = 5
learning_rate= 0.0001
Dropout= 0.2
BchSz= 32
EPOCH= 5 | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Data Augmentation | # Mode of transformation
transformation = transforms.Compose([
transforms.RandomVerticalFlip(),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0,0,0), (0.5,0.5,0.5)),
])
transformation2 = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0,0,0), (0.5,0.5,0.5)),
])
# Loss calculator
criterion = nn.CrossEntropyLoss() # cross entropy loss | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
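For reference, `Normalize` computes `(x - mean) / std` per channel, so with the values above it rescales `ToTensor`'s [0, 1] pixels to [0, 2] (a quick check):

```python
t = transforms.Normalize((0, 0, 0), (0.5, 0.5, 0.5))
print(t(torch.ones(3, 2, 2))[0, 0, 0])  # tensor(2.)
```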
Defining models Defining a class of our simple model | class ConvNet(nn.Module):
def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d( # input shape (3, 30, 600)
in_channels=3, # input height
out_channels=Num_Filter1, # n_filters
kernel_size=Ker_Sz1, # Kernel size
stride=1, # filter movement/step
padding=int((Ker_Sz1-1)/2), # if we want the same width and height of the image after conv2d,
), # padding=(kernel_size-1)/2 if stride=1
nn.BatchNorm2d(Num_Filter1), # Batch Normalization
nn.ReLU(), # Rectified linear activation
nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area,
# Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
self.layer2 = nn.Sequential(
nn.Conv2d(Num_Filter1, Num_Filter2,
kernel_size=Ker_Sz2,
stride=1,
padding=int((Ker_Sz2-1)/2)),
nn.BatchNorm2d(Num_Filter2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2), # output shape (Num_Filter2, 7, 150) for a 3x30x600 input
nn.Dropout2d(p=Dropout))
self.fc = nn.Linear(1050*Num_Filter2, num_classes) # fully connected layer, output 2 classes
def forward(self, x): # Forwarding the data to classifier
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1) # flatten the output of layer2 to (batch_size, Num_Filter2*7*150)
out = self.fc(out)
return out | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
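The `1050` in the fully connected layer comes from the spatial size left after the two 2x2 max-pools: 30x600 -> 15x300 -> 7x150, and 7 * 150 = 1050. A quick shape check (a sketch, assuming the 3x30x600 input used in this notebook):

```python
# Hypothetical check of the flattened feature size fed into self.fc
x = torch.randn(1, 3, 30, 600)
net = ConvNet(Num_Filter1, Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout)
feat = net.layer2(net.layer1(x))
print(feat.shape)  # torch.Size([1, 64, 7, 150]); 7 * 150 = 1050 per filter
```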
Defining inception classes | class BasicConv2d(nn.Module):
def __init__(self, in_planes, out_planes, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_planes, out_planes, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_planes, eps=0.001)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
out = self.relu(x)
return out
class Inception(nn.Module):
def __init__(self, in_channels):
super(Inception, self).__init__()
self.branch3x3 = BasicConv2d(in_channels, 384, kernel_size=3, stride=2)
self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, stride=2)
self.avgpool = nn.AvgPool2d(kernel_size=3, stride=2)
def forward(self, x):
branch3x3 = self.branch3x3(x)
branch3x3dbl = self.branch3x3dbl_1(x)
branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
branch_pool = self.avgpool(x)
outputs = [branch3x3, branch3x3dbl, branch_pool]
return torch.cat(outputs, 1)
class Inception_Net(nn.Module):
def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):
super(Inception_Net, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d( # input shape (3, 30, 600)
in_channels=3, # input height
out_channels=Num_Filter1, # n_filters
kernel_size=Ker_Sz1, # Kernel size
stride=1, # filter movement/step
padding=int((Ker_Sz1-1)/2), # if we want the same width and height of the image after conv2d,
), # padding=(kernel_size-1)/2 if stride=1
nn.BatchNorm2d(Num_Filter1), # Batch Normalization
nn.ReLU(), # Rectified linear activation
nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area,
# Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
self.layer2 = nn.Sequential(
nn.Conv2d(Num_Filter1, Num_Filter2,
kernel_size=Ker_Sz2,
stride=1,
padding=int((Ker_Sz2-1)/2)),
nn.BatchNorm2d(Num_Filter2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2), # output shape (Num_Filter2, 7, 150) for a 3x30x600 input
nn.Dropout2d(p=Dropout))
self.Inception = Inception(Num_Filter2)
self.fc = nn.Linear(120768, num_classes) # fully connected layer, output 2 classes
def forward(self, x): # Forwarding the data to classifier
out = self.layer1(x)
out = self.layer2(out)
out = self.Inception(out)
out = out.reshape(out.size(0), -1) # flatten the Inception output to (batch_size, 120768)
out = self.fc(out)
return out | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
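The `120768` input size of the fully connected layer can be verified the same way: the Inception block concatenates 384 + 96 + 64 = 544 channels at a 3x74 spatial size, and 544 * 3 * 74 = 120768 (a sketch under the same 3x30x600 input assumption):

```python
# Hypothetical check of the Inception output size
x = torch.randn(1, 3, 30, 600)
net = Inception_Net(Num_Filter1, Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout)
feat = net.Inception(net.layer2(net.layer1(x)))
print(feat.shape, feat.numel())  # torch.Size([1, 544, 3, 74]) 120768
```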
Finding the number of parameters in our model | def print_num_params(model):
TotalParam=0
for param in list(model.parameters()):
print("Individual parameters are:")
nn=1
for size in list(param.size()):
print(size)
nn = nn*size
print("Total parameters: {}" .format(param.numel()))
TotalParam += nn
print('-' * 10)
print("Sum of all Parameters is: {}" .format(TotalParam))
def get_num_params(model):
TotalParam=0
for param in list(model.parameters()):
nn=1
for size in list(param.size()):
nn = nn*size
TotalParam += nn
return TotalParam | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
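For example, the plain `ConvNet` defined earlier can be counted the same way for comparison with the Inception variant (a small usage sketch):

```python
net = ConvNet(Num_Filter1, Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout)
print(get_num_params(net))  # total number of parameters in the plain ConvNet
```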
Training and Validating Training and validation function | def train_model(model, criterion, optimizer, Dropout, learning_rate, BATCHSIZE, num_epochs):
print(str(datetime.now()).split('.')[0], "Starting training and validation...\n")
print("====================Data and Hyperparameter Overview====================\n")
print("Number of training examples: {} , Number of validation examples: {} \n".format(len(train_data), len(valid_data)))
print("Dropout:{:,.2f}, Learning rate: {:,.5f} "
.format( Dropout, learning_rate ))
print("Batch size: {}, Number of epochs: {} "
.format(BATCHSIZE, num_epochs))
print("Number of parameter in the model: {}". format(get_num_params(model)))
print("================================Results...==============================\n")
since = time.time() #record the beginning time
best_model = model
best_acc = 0.0
acc_vect =[]
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = Variable(images).cuda()
labels = Variable(labels).cuda()
# Forward pass
outputs = model(images) # model output
loss = criterion(outputs, labels) # cross entropy loss
# Trying binary cross entropy
#loss = criterion(torch.max(outputs.data, 1), labels)
#loss = torch.nn.functional.binary_cross_entropy(outputs, labels)
# Backward and optimize
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
if (i+1) % 1000 == 0: # Reporting the loss and progress every 1000 steps
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, len(train_loader), loss.item()))
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in valid_loader:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss += loss.item()
total += labels.size(0)
correct += (predicted == labels).sum().item()
epoch_loss= loss / total
epoch_acc = 100 * correct / total
acc_vect.append(epoch_acc)
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
print('Validation accuracy and loss of the model on {} images: {} %, {:.5f}'
.format(len(valid_data), 100 * correct / total, loss))
correct = 0
total = 0
for images, labels in train_loader:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss += loss.item()
total += labels.size(0)
correct += (predicted == labels).sum().item()
epoch_loss= loss / total
epoch_acc = 100 * correct / total
print('Train accuracy and loss of the model on {} images: {} %, {:.5f}'
.format(len(train_data), epoch_acc, loss))
print('-' * 10)
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best validation Acc: {:4f}'.format(best_acc))
mean_acc = np.mean(acc_vect)
print('Average accuracy on the validation {} images: {}'
.format(len(valid_data), mean_acc))
print('-' * 10)
return best_model, mean_acc | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Testing function | def test_model(model, test_loader):
print("Starting testing...\n")
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
test_loss_vect=[]
test_acc_vect=[]
since = time.time() #record the beginning time
for i in range(10):
Indx = torch.randperm(len(test_data))
Cut=int(len(Indx)/10) # here 10% is the proportion of the data chosen for each pool
indices=Indx[:Cut]
Sampler = Data.SubsetRandomSampler(indices)
pooled_data = torch.utils.data.DataLoader(test_data , batch_size=BchSz,sampler=Sampler)
for images, labels in pooled_data:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
total += labels.size(0)
correct += (predicted == labels).sum().item()
test_loss= loss / total
test_accuracy= 100 * correct / total
test_loss_vect.append(test_loss)
test_acc_vect.append(test_accuracy)
# print('Test accuracy and loss for the {}th pool: {:.2f} %, {:.5f}'
# .format(i+1, test_accuracy, test_loss))
mean_test_loss = np.mean(test_loss_vect)
mean_test_acc = np.mean(test_acc_vect)
std_test_acc = np.std(test_acc_vect)
print('-' * 10)
print('Average test accuracy on test data: {:.2f} %, loss: {:.5f}, Standard deviation of accuracy: {:.4f}'
.format(mean_test_acc, mean_test_loss, std_test_acc))
print('-' * 10)
time_elapsed = time.time() - since
print('Testing complete in {:.1f}m {:.4f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('-' * 10)
return mean_test_acc, mean_test_loss, std_test_acc | _____no_output_____ | MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Applying aumentation and batch size | ## Using batch size to load data
train_data = torchvision.datasets.ImageFolder(train_path,transform=transformation)
train_loader =torch.utils.data.DataLoader(train_data, batch_size=BchSz, shuffle=True,
num_workers=8)
valid_data = torchvision.datasets.ImageFolder(valid_path,transform=transformation)
valid_loader =torch.utils.data.DataLoader(valid_data, batch_size=BchSz, shuffle=True,
num_workers=8)
test_data = torchvision.datasets.ImageFolder(test_path,transform=transformation2)
test_loader =torch.utils.data.DataLoader(test_data, batch_size=BchSz, shuffle=True,
num_workers=8)
model = Inception_Net(Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2)
model = model.cuda()
print(model)
# Defining optimizer with variable learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
optimizer.scheduler=lr_scheduler.ReduceLROnPlateau(optimizer, 'min')
get_num_params(model)
seed= [1, 3, 7, 19, 22]
val_acc_vect=[]
test_acc_vect=[]
for ii in seed:
torch.cuda.manual_seed(ii)
torch.manual_seed(ii)
model, val_acc= train_model(model, criterion, optimizer, Dropout, learning_rate, BchSz, EPOCH)
testing = test_model(model, test_loader)
test_acc= testing[0]
val_acc_vect.append( val_acc )
test_acc_vect.append(test_acc)
mean_val_acc = np.mean(val_acc_vect)
mean_test_acc = np.mean(test_acc_vect)
print('-' * 10)
print('-' * 10)
print('Average of validation accuracies on 5 different random seeds: {:.2f} %, Average of testing accuracies on 5 different random seeds: {:.2f} %'
.format(mean_val_acc, mean_test_acc))
| 2019-03-01 15:11:27 Starting training and validation...
====================Data and Hyperparameter Overview====================
Number of training examples: 24000 , Number of validation examples: 8000
Dropout:0.20, Learning rate: 0.00010
Batch size: 32, Number of epochs: 5
Number of parameters in the model: 633378
================================Results...==============================
Validation accuracy and loss of the model on 8000 images: 64.9 %, 1.33086
Train accuracy and loss of the model on 24000 images: 62.7375 %, 1.01242
----------
Validation accuracy and loss of the model on 8000 images: 75.4 %, 0.76369
Train accuracy and loss of the model on 24000 images: 77.225 %, 1.38264
----------
Validation accuracy and loss of the model on 8000 images: 77.35 %, 1.22606
Train accuracy and loss of the model on 24000 images: 87.25833333333334 %, 0.64452
----------
Validation accuracy and loss of the model on 8000 images: 72.8875 %, 0.65668
Train accuracy and loss of the model on 24000 images: 88.3125 %, 0.52884
----------
Validation accuracy and loss of the model on 8000 images: 79.6875 %, 1.17200
Train accuracy and loss of the model on 24000 images: 95.64583333333333 %, 0.63624
----------
Training complete in 1m 55s
Best validation Acc: 79.687500
Average accuracy on the validation 8000 images: 74.045
----------
Starting testing...
----------
Average test accuracy on test data: 77.27 %, loss: 0.00026, Standard deviation of accuracy: 0.7046
----------
Testing complete in 0.0m 5.8832s
----------
2019-03-01 15:13:28 Starting training and validation...
====================Data and Hyperparameter Overview====================
Number of training examples: 24000 , Number of validation examples: 8000
Dropout:0.20, Learning rate: 0.00010
Batch size: 32, Number of epochs: 5
Number of parameters in the model: 633378
================================Results...==============================
Validation accuracy and loss of the model on 8000 images: 80.275 %, 0.75893
Train accuracy and loss of the model on 24000 images: 95.59583333333333 %, 0.11324
----------
Validation accuracy and loss of the model on 8000 images: 79.4 %, 1.01741
Train accuracy and loss of the model on 24000 images: 95.62916666666666 %, 0.20947
----------
Validation accuracy and loss of the model on 8000 images: 80.3875 %, 0.54221
Train accuracy and loss of the model on 24000 images: 95.6375 %, 0.08113
----------
Validation accuracy and loss of the model on 8000 images: 79.375 %, 0.50299
Train accuracy and loss of the model on 24000 images: 95.59583333333333 %, 0.42088
----------
Validation accuracy and loss of the model on 8000 images: 80.075 %, 2.54078
Train accuracy and loss of the model on 24000 images: 95.75416666666666 %, 0.24887
----------
Training complete in 1m 55s
Best validation Acc: 80.387500
Average accuracy on the validation 8000 images: 79.9025
----------
Starting testing...
----------
Average test accuracy on test data: 76.61 %, loss: 0.00041, Standard deviation of accuracy: 0.4764
----------
Testing complete in 0.0m 5.7241s
----------
2019-03-01 15:15:28 Starting training and validation...
====================Data and Hyperparameter Overview====================
Number of training examples: 24000 , Number of validation examples: 8000
Dropout:0.20, Learning rate: 0.00010
Batch size: 32, Number of epochs: 5
Number of parameters in the model: 633378
================================Results...==============================
Validation accuracy and loss of the model on 8000 images: 80.0625 %, 1.32076
Train accuracy and loss of the model on 24000 images: 95.54166666666667 %, 0.43024
----------
Validation accuracy and loss of the model on 8000 images: 79.8875 %, 0.41576
Train accuracy and loss of the model on 24000 images: 95.54166666666667 %, 0.24901
----------
Validation accuracy and loss of the model on 8000 images: 79.575 %, 1.62173
Train accuracy and loss of the model on 24000 images: 95.81666666666666 %, 0.24963
----------
Validation accuracy and loss of the model on 8000 images: 79.925 %, 2.40927
Train accuracy and loss of the model on 24000 images: 95.60833333333333 %, 0.15915
----------
Validation accuracy and loss of the model on 8000 images: 80.1 %, 1.71480
Train accuracy and loss of the model on 24000 images: 95.70416666666667 %, 0.18263
----------
Training complete in 1m 54s
Best validation Acc: 80.100000
Average accuracy on the validation 8000 images: 79.91
----------
Starting testing...
----------
Average test accuracy on test data: 76.58 %, loss: 0.00036, Standard deviation of accuracy: 0.3228
----------
Testing complete in 0.0m 5.7930s
----------
2019-03-01 15:17:29 Starting training and validation...
====================Data and Hyperparameter Overview====================
Number of training examples: 24000 , Number of validation examples: 8000
Dropout:0.20, Learning rate: 0.00010
Batch size: 32, Number of epochs: 5
Number of parameters in the model: 633378
================================Results...==============================
Validation accuracy and loss of the model on 8000 images: 79.85 %, 1.32361
Train accuracy and loss of the model on 24000 images: 95.53333333333333 %, 0.22441
----------
Validation accuracy and loss of the model on 8000 images: 80.225 %, 1.95208
Train accuracy and loss of the model on 24000 images: 95.63333333333334 %, 0.08277
----------
Validation accuracy and loss of the model on 8000 images: 79.425 %, 1.50681
Train accuracy and loss of the model on 24000 images: 95.70416666666667 %, 0.11324
----------
Validation accuracy and loss of the model on 8000 images: 80.0625 %, 1.03933
Train accuracy and loss of the model on 24000 images: 95.58333333333333 %, 0.67020
----------
Validation accuracy and loss of the model on 8000 images: 79.875 %, 0.84893
Train accuracy and loss of the model on 24000 images: 95.52083333333333 %, 0.12579
----------
Training complete in 1m 55s
Best validation Acc: 80.225000
Average accuracy on the validation 8000 images: 79.8875
----------
Starting testing...
----------
Average test accuracy on test data: 76.76 %, loss: 0.00031, Standard deviation of accuracy: 0.6555
----------
Testing complete in 0.0m 5.8354s
----------
2019-03-01 15:19:29 Starting training and validation...
====================Data and Hyperparameter Overview====================
Number of training examples: 24000 , Number of validation examples: 8000
Dropout:0.20, Learning rate: 0.00010
Batch size: 32, Number of epochs: 5
Number of parameters in the model: 633378
================================Results...==============================
Validation accuracy and loss of the model on 8000 images: 79.7625 %, 1.31404
Train accuracy and loss of the model on 24000 images: 95.51666666666667 %, 0.21090
----------
Validation accuracy and loss of the model on 8000 images: 79.3125 %, 0.71353
Train accuracy and loss of the model on 24000 images: 95.7 %, 0.29437
----------
Validation accuracy and loss of the model on 8000 images: 79.975 %, 0.97653
Train accuracy and loss of the model on 24000 images: 95.67083333333333 %, 0.10430
----------
Validation accuracy and loss of the model on 8000 images: 79.4375 %, 1.69258
Train accuracy and loss of the model on 24000 images: 95.55 %, 0.14140
----------
Validation accuracy and loss of the model on 8000 images: 80.075 %, 1.34002
Train accuracy and loss of the model on 24000 images: 95.53333333333333 %, 0.33516
----------
Training complete in 1m 57s
Best validation Acc: 80.075000
Average accuracy on the validation 8000 images: 79.71249999999999
----------
Starting testing...
----------
Average test accuracy on test data: 76.57 %, loss: 0.00025, Standard deviation of accuracy: 0.3669
----------
Testing complete in 0.0m 5.8700s
----------
----------
----------
Average of validation accuracies on 5 different random seeds: 78.69 %, Average of testing accuracies on 5 different random seeds: 76.76 %
| MIT | Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb | Saman689/Weed-sensing-basics |
Import all needed packages | import os
import ast
import numpy as np
import pandas as pd
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM, Dropout
from keras.utils import to_categorical
from keras.datasets import mnist
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.python.keras.callbacks import ModelCheckpoint, TensorBoard
import context
build = context.build_promoter
construct = context.construct_neural_net
encode = context.encode_sequences
organize = context.organize_data
ROOT_DIR = os.getcwd()[:os.getcwd().rfind('Express')] + 'ExpressYeaself/'
SAVE_DIR = ROOT_DIR + 'expressyeaself/models/lstm/saved_models/'
ROOT_DIR | Using TensorFlow backend.
| MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Define the input data Using a 10,000-sequence sample of the data set | sample_filename = ('10000_from_20190612130111781831_percentiles_els_binarized_homogeneous_deflanked_'
'sequences_with_exp_levels.txt.gz') | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Define the absolute path | sample_path = ROOT_DIR + 'example/processed_data/' + sample_filename | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Encode sequences | # Seems to give slightly better accuracy when expression level values aren't scaled.
scale_els = False
X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)
num_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path) | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
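For intuition, one-hot encoding maps each sequence position to a vector over the alphabet. A minimal illustration (an assumption for illustration only: a 5-symbol alphabet with `'N'` as a stand-in pad/other symbol, mirroring the `5 * max_sequence_len` reshape used below; the real encoding is done by `encode.encode_sequences_with_method` above):

```python
# Hypothetical one-hot sketch
alphabet = {base: i for i, base in enumerate('ACGTN')}

def one_hot(seq, length):
    mat = np.zeros((length, len(alphabet)))
    for j, base in enumerate(seq):
        mat[j, alphabet[base]] = 1
    return mat

print(one_hot('ACGT', 6))  # a 6x5 matrix; the last two rows stay all-zero padding
```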
Build the 3-dimensional LSTM model Reshape the encoded sequences | X_padded = X_padded.reshape(-1)
X_padded = X_padded.reshape(int(num_seqs), 1, 5 * int(max_sequence_len)) | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Reshape expression levels | y_scaled = y_scaled.reshape(len(y_scaled), 1, 1) | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Perform a train-test split | test_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size) | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
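A quick shape check after the reshapes and split (with 10,000 sequences and `test_size=0.25`, the training arrays should hold 7,500 examples):

```python
print(X_train.shape, y_train.shape)  # (7500, 1, 5 * max_sequence_len) and (7500, 1, 1)
```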
Build the model | # Define the model parameters
batch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data
epochs = 50
dropout = 0.3
learning_rate = 0.01
# Define the checkpointer to allow saving of models
model_type = 'lstm_sequential_3d_onehot'
save_path = SAVE_DIR + model_type + '.hdf5'
checkpointer = ModelCheckpoint(monitor='val_acc',
filepath=save_path,
verbose=1,
save_best_only=True)
# Define the model
model = Sequential()
# Build up the layers
model.add(Dense(1024, kernel_initializer='uniform', input_shape=(1,5*int(max_sequence_len),)))
model.add(Activation('softmax'))
model.add(Dropout(dropout))
# model.add(Dense(512, kernel_initializer='uniform', input_shape=(1,1024,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
model.add(Dense(256, kernel_initializer='uniform', input_shape=(1,512,)))
model.add(Activation('softmax'))
model.add(Dropout(dropout))
# model.add(Dense(128, kernel_initializer='uniform', input_shape=(1,256,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(64, kernel_initializer='uniform', input_shape=(1,128,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(32, kernel_initializer='uniform', input_shape=(1,64,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(16, kernel_initializer='uniform', input_shape=(1,32,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(8, kernel_initializer='uniform', input_shape=(1,16,)))
# model.add(Activation('softmax'))
model.add(LSTM(units=1, return_sequences=True))
sgd = optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
# Compile the model
model.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy'])
# Print model summary
print(model.summary())
# model.add(LSTM(100,input_shape=(int(max_sequence_len), 5)))
# model.add(Dropout(dropout))
# model.add(Dense(50, activation='sigmoid'))
# # model.add(Dense(25, activation='sigmoid'))
# # model.add(Dense(12, activation='sigmoid'))
# # model.add(Dense(6, activation='sigmoid'))
# # model.add(Dense(3, activation='sigmoid'))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mse',
# optimizer='rmsprop',
# metrics=['accuracy'])
# print(model.summary()) | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_87 (Dense) (None, 1, 1024) 410624
_________________________________________________________________
activation_69 (Activation) (None, 1, 1024) 0
_________________________________________________________________
dropout_72 (Dropout) (None, 1, 1024) 0
_________________________________________________________________
dense_88 (Dense) (None, 1, 256) 262400
_________________________________________________________________
activation_70 (Activation) (None, 1, 256) 0
_________________________________________________________________
dropout_73 (Dropout) (None, 1, 256) 0
_________________________________________________________________
lstm_25 (LSTM) (None, 1, 1) 1032
=================================================================
Total params: 674,056
Trainable params: 674,056
Non-trainable params: 0
_________________________________________________________________
None
| MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
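Because the checkpointer saves the model with the best `val_acc`, the best weights can be restored once the fit below has finished (a sketch; `save_path` is the checkpoint path defined above):

```python
# Reload the best checkpoint after training
from keras.models import load_model
best_model = load_model(save_path)
```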
Fit and Evaluate the model | # Fit
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
validation_data=(X_test, y_test), callbacks=[checkpointer])
# Evaluate
score = max(history.history['val_acc'])
print("%s: %.2f%%" % (model.metrics_names[1], score*100))
plt = construct.plot_results(history.history)
plt.show() | Train on 7500 samples, validate on 2500 samples
Epoch 1/500
7500/7500 [==============================] - 4s 594us/step - loss: 0.4805 - acc: 0.4929 - val_loss: 0.4735 - val_acc: 0.4740
Epoch 00001: val_acc improved from -inf to 0.47400, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epochs 2-11: val_acc did not improve from 0.47400]
Epoch 12/500
7500/7500 [==============================] - 2s 247us/step - loss: 0.2500 - acc: 0.5005 - val_loss: 0.2500 - val_acc: 0.5260
Epoch 00012: val_acc improved from 0.47400 to 0.52600, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epochs 13-24: val_acc did not improve from 0.52600]
Epoch 25/500
7500/7500 [==============================] - 2s 264us/step - loss: 0.2474 - acc: 0.5551 - val_loss: 0.2466 - val_acc: 0.6284
Epoch 00025: val_acc improved from 0.52600 to 0.62840, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
Epoch 26/500
7500/7500 [==============================] - 2s 236us/step - loss: 0.2456 - acc: 0.5913 - val_loss: 0.2449 - val_acc: 0.7040
Epoch 00026: val_acc improved from 0.62840 to 0.70400, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
Epoch 27/500
7500/7500 [==============================] - 2s 242us/step - loss: 0.2435 - acc: 0.6288 - val_loss: 0.2425 - val_acc: 0.7100
Epoch 00027: val_acc improved from 0.70400 to 0.71000, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
Epoch 28/500
7500/7500 [==============================] - 2s 259us/step - loss: 0.2404 - acc: 0.6519 - val_loss: 0.2394 - val_acc: 0.7168
Epoch 00028: val_acc improved from 0.71000 to 0.71680, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epoch 29: val_acc did not improve from 0.71680]
Epoch 30/500
7500/7500 [==============================] - 2s 231us/step - loss: 0.2324 - acc: 0.6643 - val_loss: 0.2312 - val_acc: 0.7220
Epoch 00030: val_acc improved from 0.71680 to 0.72200, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epoch 31: val_acc did not improve from 0.72200]
Epoch 32/500
7500/7500 [==============================] - 2s 234us/step - loss: 0.2228 - acc: 0.6655 - val_loss: 0.2205 - val_acc: 0.7228
Epoch 00032: val_acc improved from 0.72200 to 0.72280, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epochs 33-35: val_acc did not improve from 0.72280]
Epoch 36/500
7500/7500 [==============================] - 2s 245us/step - loss: 0.2043 - acc: 0.6691 - val_loss: 0.2012 - val_acc: 0.7232
Epoch 00036: val_acc improved from 0.72280 to 0.72320, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epochs 37-59: val_acc did not improve from 0.72320]
Epoch 60/500
7500/7500 [==============================] - 2s 248us/step - loss: 0.1936 - acc: 0.6952 - val_loss: 0.1936 - val_acc: 0.7248
Epoch 00060: val_acc improved from 0.72320 to 0.72480, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5
[Epochs 61-317: val_acc never improved from 0.72480. Training acc kept climbing to ~0.77 while val_loss drifted upward from ~0.193 to ~0.239, a clear sign of overfitting; the captured log is truncated at epoch 317.]
Epoch 00317: val_acc did not improve from 0.72480
Epoch 318/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1648 - acc: 0.7701 - val_loss: 0.2394 - val_acc: 0.7136
Epoch 00318: val_acc did not improve from 0.72480
Epoch 319/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1637 - acc: 0.7707 - val_loss: 0.2382 - val_acc: 0.7100
Epoch 00319: val_acc did not improve from 0.72480
Epoch 320/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1631 - acc: 0.7720 - val_loss: 0.2437 - val_acc: 0.7088
Epoch 00320: val_acc did not improve from 0.72480
Epoch 321/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1637 - acc: 0.7735 - val_loss: 0.2398 - val_acc: 0.7112
Epoch 00321: val_acc did not improve from 0.72480
Epoch 322/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1638 - acc: 0.7715 - val_loss: 0.2416 - val_acc: 0.7140
Epoch 00322: val_acc did not improve from 0.72480
Epoch 323/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1640 - acc: 0.7708 - val_loss: 0.2383 - val_acc: 0.7100
Epoch 00323: val_acc did not improve from 0.72480
Epoch 324/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1630 - acc: 0.7735 - val_loss: 0.2384 - val_acc: 0.7112
Epoch 00324: val_acc did not improve from 0.72480
Epoch 325/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1628 - acc: 0.7768 - val_loss: 0.2407 - val_acc: 0.7092
Epoch 00325: val_acc did not improve from 0.72480
Epoch 326/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1639 - acc: 0.7716 - val_loss: 0.2425 - val_acc: 0.7112
Epoch 00326: val_acc did not improve from 0.72480
Epoch 327/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1650 - acc: 0.7676 - val_loss: 0.2402 - val_acc: 0.7108
Epoch 00327: val_acc did not improve from 0.72480
Epoch 328/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1636 - acc: 0.7720 - val_loss: 0.2441 - val_acc: 0.7148
Epoch 00328: val_acc did not improve from 0.72480
Epoch 329/500
7500/7500 [==============================] - 2s 266us/step - loss: 0.1638 - acc: 0.7728 - val_loss: 0.2399 - val_acc: 0.7104
Epoch 00329: val_acc did not improve from 0.72480
Epoch 330/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1630 - acc: 0.7731 - val_loss: 0.2396 - val_acc: 0.7104
Epoch 00330: val_acc did not improve from 0.72480
Epoch 331/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1649 - acc: 0.7699 - val_loss: 0.2422 - val_acc: 0.7112
Epoch 00331: val_acc did not improve from 0.72480
Epoch 332/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1644 - acc: 0.7697 - val_loss: 0.2421 - val_acc: 0.7116
Epoch 00332: val_acc did not improve from 0.72480
Epoch 333/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1632 - acc: 0.7715 - val_loss: 0.2446 - val_acc: 0.7128
Epoch 00333: val_acc did not improve from 0.72480
Epoch 334/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1633 - acc: 0.7720 - val_loss: 0.2402 - val_acc: 0.7100
Epoch 00334: val_acc did not improve from 0.72480
Epoch 335/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1637 - acc: 0.7739 - val_loss: 0.2406 - val_acc: 0.7116
Epoch 00335: val_acc did not improve from 0.72480
Epoch 336/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1640 - acc: 0.7712 - val_loss: 0.2419 - val_acc: 0.7116
Epoch 00336: val_acc did not improve from 0.72480
Epoch 337/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1635 - acc: 0.7735 - val_loss: 0.2420 - val_acc: 0.7108
Epoch 00337: val_acc did not improve from 0.72480
Epoch 338/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1639 - acc: 0.7687 - val_loss: 0.2417 - val_acc: 0.7116
Epoch 00338: val_acc did not improve from 0.72480
Epoch 339/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1621 - acc: 0.7780 - val_loss: 0.2438 - val_acc: 0.7116
Epoch 00339: val_acc did not improve from 0.72480
Epoch 340/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1626 - acc: 0.7719 - val_loss: 0.2440 - val_acc: 0.7120
Epoch 00340: val_acc did not improve from 0.72480
Epoch 341/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1626 - acc: 0.7716 - val_loss: 0.2427 - val_acc: 0.7124
Epoch 00341: val_acc did not improve from 0.72480
Epoch 342/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1619 - acc: 0.7735 - val_loss: 0.2442 - val_acc: 0.7136
Epoch 00342: val_acc did not improve from 0.72480
Epoch 343/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1632 - acc: 0.7740 - val_loss: 0.2436 - val_acc: 0.7124
Epoch 00343: val_acc did not improve from 0.72480
Epoch 344/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1649 - acc: 0.7661 - val_loss: 0.2424 - val_acc: 0.7124
Epoch 00344: val_acc did not improve from 0.72480
Epoch 345/500
7500/7500 [==============================] - 2s 281us/step - loss: 0.1631 - acc: 0.7736 - val_loss: 0.2434 - val_acc: 0.7112
Epoch 00345: val_acc did not improve from 0.72480
Epoch 346/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1649 - acc: 0.7705 - val_loss: 0.2476 - val_acc: 0.7104
Epoch 00346: val_acc did not improve from 0.72480
Epoch 347/500
7500/7500 [==============================] - 2s 276us/step - loss: 0.1642 - acc: 0.7693 - val_loss: 0.2448 - val_acc: 0.7092
Epoch 00347: val_acc did not improve from 0.72480
Epoch 348/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1646 - acc: 0.7669 - val_loss: 0.2451 - val_acc: 0.7116
Epoch 00348: val_acc did not improve from 0.72480
Epoch 349/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7732 - val_loss: 0.2441 - val_acc: 0.7112
Epoch 00349: val_acc did not improve from 0.72480
Epoch 350/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1632 - acc: 0.7716 - val_loss: 0.2435 - val_acc: 0.7112
Epoch 00350: val_acc did not improve from 0.72480
Epoch 351/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1646 - acc: 0.7723 - val_loss: 0.2455 - val_acc: 0.7112
Epoch 00351: val_acc did not improve from 0.72480
Epoch 352/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1613 - acc: 0.7741 - val_loss: 0.2432 - val_acc: 0.7108
Epoch 00352: val_acc did not improve from 0.72480
Epoch 353/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1629 - acc: 0.7697 - val_loss: 0.2458 - val_acc: 0.7108
Epoch 00353: val_acc did not improve from 0.72480
Epoch 354/500
7500/7500 [==============================] - 2s 265us/step - loss: 0.1615 - acc: 0.7733 - val_loss: 0.2489 - val_acc: 0.7108
Epoch 00354: val_acc did not improve from 0.72480
Epoch 355/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1618 - acc: 0.7729 - val_loss: 0.2440 - val_acc: 0.7100
Epoch 00355: val_acc did not improve from 0.72480
Epoch 356/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1627 - acc: 0.7699 - val_loss: 0.2434 - val_acc: 0.7096
Epoch 00356: val_acc did not improve from 0.72480
Epoch 357/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1622 - acc: 0.7744 - val_loss: 0.2451 - val_acc: 0.7112
Epoch 00357: val_acc did not improve from 0.72480
Epoch 358/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1633 - acc: 0.7665 - val_loss: 0.2482 - val_acc: 0.7112
Epoch 00358: val_acc did not improve from 0.72480
Epoch 359/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1616 - acc: 0.7765 - val_loss: 0.2468 - val_acc: 0.7096
Epoch 00359: val_acc did not improve from 0.72480
Epoch 360/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7727 - val_loss: 0.2495 - val_acc: 0.7080
Epoch 00360: val_acc did not improve from 0.72480
Epoch 361/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1640 - acc: 0.7677 - val_loss: 0.2452 - val_acc: 0.7100
Epoch 00361: val_acc did not improve from 0.72480
Epoch 362/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1626 - acc: 0.7715 - val_loss: 0.2490 - val_acc: 0.7076
Epoch 00362: val_acc did not improve from 0.72480
Epoch 363/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1651 - acc: 0.7657 - val_loss: 0.2485 - val_acc: 0.7088
Epoch 00363: val_acc did not improve from 0.72480
Epoch 364/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7781 - val_loss: 0.2512 - val_acc: 0.7068
Epoch 00364: val_acc did not improve from 0.72480
Epoch 365/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1624 - acc: 0.7703 - val_loss: 0.2482 - val_acc: 0.7108
Epoch 00365: val_acc did not improve from 0.72480
Epoch 366/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7744 - val_loss: 0.2484 - val_acc: 0.7108
Epoch 00366: val_acc did not improve from 0.72480
Epoch 367/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7697 - val_loss: 0.2471 - val_acc: 0.7108
Epoch 00367: val_acc did not improve from 0.72480
Epoch 368/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1615 - acc: 0.7760 - val_loss: 0.2495 - val_acc: 0.7088
Epoch 00368: val_acc did not improve from 0.72480
Epoch 369/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.2496 - val_acc: 0.7096
Epoch 00369: val_acc did not improve from 0.72480
Epoch 370/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1638 - acc: 0.7685 - val_loss: 0.2493 - val_acc: 0.7076
Epoch 00370: val_acc did not improve from 0.72480
Epoch 371/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1613 - acc: 0.7727 - val_loss: 0.2490 - val_acc: 0.7108
Epoch 00371: val_acc did not improve from 0.72480
Epoch 372/500
7500/7500 [==============================] - 2s 266us/step - loss: 0.1611 - acc: 0.7732 - val_loss: 0.2507 - val_acc: 0.7112
Epoch 00372: val_acc did not improve from 0.72480
Epoch 373/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1634 - acc: 0.7685 - val_loss: 0.2493 - val_acc: 0.7104
Epoch 00373: val_acc did not improve from 0.72480
Epoch 374/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1608 - acc: 0.7733 - val_loss: 0.2472 - val_acc: 0.7120
Epoch 00374: val_acc did not improve from 0.72480
Epoch 375/500
7500/7500 [==============================] - 2s 276us/step - loss: 0.1649 - acc: 0.7671 - val_loss: 0.2481 - val_acc: 0.7096
Epoch 00375: val_acc did not improve from 0.72480
Epoch 376/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1636 - acc: 0.7707 - val_loss: 0.2494 - val_acc: 0.7104
Epoch 00376: val_acc did not improve from 0.72480
Epoch 377/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1611 - acc: 0.7691 - val_loss: 0.2478 - val_acc: 0.7104
Epoch 00377: val_acc did not improve from 0.72480
Epoch 378/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1626 - acc: 0.7687 - val_loss: 0.2485 - val_acc: 0.7104
Epoch 00378: val_acc did not improve from 0.72480
Epoch 379/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7731 - val_loss: 0.2494 - val_acc: 0.7112
Epoch 00379: val_acc did not improve from 0.72480
Epoch 380/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1606 - acc: 0.7735 - val_loss: 0.2503 - val_acc: 0.7092
Epoch 00380: val_acc did not improve from 0.72480
Epoch 381/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1620 - acc: 0.7709 - val_loss: 0.2539 - val_acc: 0.7072
Epoch 00381: val_acc did not improve from 0.72480
Epoch 382/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1614 - acc: 0.7717 - val_loss: 0.2494 - val_acc: 0.7104
Epoch 00382: val_acc did not improve from 0.72480
Epoch 383/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1598 - acc: 0.7748 - val_loss: 0.2472 - val_acc: 0.7076
Epoch 00383: val_acc did not improve from 0.72480
Epoch 384/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1606 - acc: 0.7759 - val_loss: 0.2486 - val_acc: 0.7092
Epoch 00384: val_acc did not improve from 0.72480
Epoch 385/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1623 - acc: 0.7712 - val_loss: 0.2485 - val_acc: 0.7108
Epoch 00385: val_acc did not improve from 0.72480
Epoch 386/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7707 - val_loss: 0.2480 - val_acc: 0.7112
Epoch 00386: val_acc did not improve from 0.72480
Epoch 387/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1600 - acc: 0.7748 - val_loss: 0.2519 - val_acc: 0.7100
Epoch 00387: val_acc did not improve from 0.72480
Epoch 388/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1624 - acc: 0.7715 - val_loss: 0.2501 - val_acc: 0.7112
Epoch 00388: val_acc did not improve from 0.72480
Epoch 389/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1643 - acc: 0.7675 - val_loss: 0.2541 - val_acc: 0.7088
Epoch 00389: val_acc did not improve from 0.72480
Epoch 390/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1619 - acc: 0.7709 - val_loss: 0.2472 - val_acc: 0.7104
Epoch 00390: val_acc did not improve from 0.72480
Epoch 391/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7685 - val_loss: 0.2520 - val_acc: 0.7104
Epoch 00391: val_acc did not improve from 0.72480
Epoch 392/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1622 - acc: 0.7677 - val_loss: 0.2485 - val_acc: 0.7092
Epoch 00392: val_acc did not improve from 0.72480
Epoch 393/500
7500/7500 [==============================] - 2s 276us/step - loss: 0.1600 - acc: 0.7745 - val_loss: 0.2507 - val_acc: 0.7092
Epoch 00393: val_acc did not improve from 0.72480
Epoch 394/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1574 - acc: 0.7797 - val_loss: 0.2486 - val_acc: 0.7104
Epoch 00394: val_acc did not improve from 0.72480
Epoch 395/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1610 - acc: 0.7673 - val_loss: 0.2502 - val_acc: 0.7104
Epoch 00395: val_acc did not improve from 0.72480
Epoch 396/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1610 - acc: 0.7707 - val_loss: 0.2522 - val_acc: 0.7128
Epoch 00396: val_acc did not improve from 0.72480
Epoch 397/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1604 - acc: 0.7736 - val_loss: 0.2551 - val_acc: 0.7120
Epoch 00397: val_acc did not improve from 0.72480
Epoch 398/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1609 - acc: 0.7756 - val_loss: 0.2525 - val_acc: 0.7132
Epoch 00398: val_acc did not improve from 0.72480
Epoch 399/500
7500/7500 [==============================] - 2s 287us/step - loss: 0.1602 - acc: 0.7723 - val_loss: 0.2551 - val_acc: 0.7096
Epoch 00399: val_acc did not improve from 0.72480
Epoch 400/500
7500/7500 [==============================] - 2s 282us/step - loss: 0.1634 - acc: 0.7661 - val_loss: 0.2564 - val_acc: 0.7100
Epoch 00400: val_acc did not improve from 0.72480
Epoch 401/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1611 - acc: 0.7696 - val_loss: 0.2540 - val_acc: 0.7112
Epoch 00401: val_acc did not improve from 0.72480
Epoch 402/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1600 - acc: 0.7727 - val_loss: 0.2528 - val_acc: 0.7128
Epoch 00402: val_acc did not improve from 0.72480
Epoch 403/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1597 - acc: 0.7728 - val_loss: 0.2572 - val_acc: 0.7084
Epoch 00403: val_acc did not improve from 0.72480
Epoch 404/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1633 - acc: 0.7693 - val_loss: 0.2540 - val_acc: 0.7112
Epoch 00404: val_acc did not improve from 0.72480
Epoch 405/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7736 - val_loss: 0.2533 - val_acc: 0.7104
Epoch 00405: val_acc did not improve from 0.72480
Epoch 406/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7727 - val_loss: 0.2555 - val_acc: 0.7116
Epoch 00406: val_acc did not improve from 0.72480
Epoch 407/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1619 - acc: 0.7703 - val_loss: 0.2528 - val_acc: 0.7108
Epoch 00407: val_acc did not improve from 0.72480
Epoch 408/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1607 - acc: 0.7723 - val_loss: 0.2549 - val_acc: 0.7116
Epoch 00408: val_acc did not improve from 0.72480
Epoch 409/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1595 - acc: 0.7769 - val_loss: 0.2515 - val_acc: 0.7112
Epoch 00409: val_acc did not improve from 0.72480
Epoch 410/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1593 - acc: 0.7744 - val_loss: 0.2549 - val_acc: 0.7124
Epoch 00410: val_acc did not improve from 0.72480
Epoch 411/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1622 - acc: 0.7691 - val_loss: 0.2546 - val_acc: 0.7116
Epoch 00411: val_acc did not improve from 0.72480
Epoch 412/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1598 - acc: 0.7779 - val_loss: 0.2560 - val_acc: 0.7084
Epoch 00412: val_acc did not improve from 0.72480
Epoch 413/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1605 - acc: 0.7752 - val_loss: 0.2583 - val_acc: 0.7096
Epoch 00413: val_acc did not improve from 0.72480
Epoch 414/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1604 - acc: 0.7761 - val_loss: 0.2535 - val_acc: 0.7100
Epoch 00414: val_acc did not improve from 0.72480
Epoch 415/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1594 - acc: 0.7756 - val_loss: 0.2571 - val_acc: 0.7088
Epoch 00415: val_acc did not improve from 0.72480
Epoch 416/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1606 - acc: 0.7735 - val_loss: 0.2542 - val_acc: 0.7128
Epoch 00416: val_acc did not improve from 0.72480
Epoch 417/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1593 - acc: 0.7771 - val_loss: 0.2584 - val_acc: 0.7096
Epoch 00417: val_acc did not improve from 0.72480
Epoch 418/500
7500/7500 [==============================] - 2s 278us/step - loss: 0.1601 - acc: 0.7759 - val_loss: 0.2539 - val_acc: 0.7108
Epoch 00418: val_acc did not improve from 0.72480
Epoch 419/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7711 - val_loss: 0.2551 - val_acc: 0.7108
Epoch 00419: val_acc did not improve from 0.72480
Epoch 420/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1579 - acc: 0.7788 - val_loss: 0.2569 - val_acc: 0.7092
Epoch 00420: val_acc did not improve from 0.72480
Epoch 421/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1604 - acc: 0.7720 - val_loss: 0.2577 - val_acc: 0.7076
Epoch 00421: val_acc did not improve from 0.72480
Epoch 422/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7701 - val_loss: 0.2573 - val_acc: 0.7096
Epoch 00422: val_acc did not improve from 0.72480
Epoch 423/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1603 - acc: 0.7747 - val_loss: 0.2557 - val_acc: 0.7120
Epoch 00423: val_acc did not improve from 0.72480
Epoch 424/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1609 - acc: 0.7721 - val_loss: 0.2574 - val_acc: 0.7112
Epoch 00424: val_acc did not improve from 0.72480
Epoch 425/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1617 - acc: 0.7700 - val_loss: 0.2566 - val_acc: 0.7120
Epoch 00425: val_acc did not improve from 0.72480
Epoch 426/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1614 - acc: 0.7727 - val_loss: 0.2562 - val_acc: 0.7108
Epoch 00426: val_acc did not improve from 0.72480
Epoch 427/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1610 - acc: 0.7713 - val_loss: 0.2614 - val_acc: 0.7064
Epoch 00427: val_acc did not improve from 0.72480
Epoch 428/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1603 - acc: 0.7707 - val_loss: 0.2578 - val_acc: 0.7104
Epoch 00428: val_acc did not improve from 0.72480
Epoch 429/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7697 - val_loss: 0.2574 - val_acc: 0.7084
Epoch 00429: val_acc did not improve from 0.72480
Epoch 430/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1609 - acc: 0.7733 - val_loss: 0.2563 - val_acc: 0.7108
Epoch 00430: val_acc did not improve from 0.72480
Epoch 431/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1609 - acc: 0.7720 - val_loss: 0.2584 - val_acc: 0.7092
Epoch 00431: val_acc did not improve from 0.72480
Epoch 432/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7709 - val_loss: 0.2584 - val_acc: 0.7104
Epoch 00432: val_acc did not improve from 0.72480
Epoch 433/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1604 - acc: 0.7695 - val_loss: 0.2590 - val_acc: 0.7084
Epoch 00433: val_acc did not improve from 0.72480
Epoch 434/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1603 - acc: 0.7715 - val_loss: 0.2626 - val_acc: 0.7064
Epoch 00434: val_acc did not improve from 0.72480
Epoch 435/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1600 - acc: 0.7700 - val_loss: 0.2615 - val_acc: 0.7108
Epoch 00435: val_acc did not improve from 0.72480
Epoch 436/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1602 - acc: 0.7761 - val_loss: 0.2567 - val_acc: 0.7100
Epoch 00436: val_acc did not improve from 0.72480
Epoch 437/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1603 - acc: 0.7717 - val_loss: 0.2563 - val_acc: 0.7104
Epoch 00437: val_acc did not improve from 0.72480
Epoch 438/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1607 - acc: 0.7712 - val_loss: 0.2597 - val_acc: 0.7104
Epoch 00438: val_acc did not improve from 0.72480
Epoch 439/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1607 - acc: 0.7735 - val_loss: 0.2611 - val_acc: 0.7104
Epoch 00439: val_acc did not improve from 0.72480
Epoch 440/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1597 - acc: 0.7744 - val_loss: 0.2596 - val_acc: 0.7112
Epoch 00440: val_acc did not improve from 0.72480
Epoch 441/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1580 - acc: 0.7719 - val_loss: 0.2619 - val_acc: 0.7124
Epoch 00441: val_acc did not improve from 0.72480
Epoch 442/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1627 - acc: 0.7665 - val_loss: 0.2577 - val_acc: 0.7124
Epoch 00442: val_acc did not improve from 0.72480
Epoch 443/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1606 - acc: 0.7729 - val_loss: 0.2569 - val_acc: 0.7116
Epoch 00443: val_acc did not improve from 0.72480
Epoch 444/500
7500/7500 [==============================] - 2s 286us/step - loss: 0.1607 - acc: 0.7712 - val_loss: 0.2523 - val_acc: 0.7112
Epoch 00444: val_acc did not improve from 0.72480
Epoch 445/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7715 - val_loss: 0.2573 - val_acc: 0.7112
Epoch 00445: val_acc did not improve from 0.72480
Epoch 446/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7679 - val_loss: 0.2610 - val_acc: 0.7096
Epoch 00446: val_acc did not improve from 0.72480
Epoch 447/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1616 - acc: 0.7697 - val_loss: 0.2581 - val_acc: 0.7108
Epoch 00447: val_acc did not improve from 0.72480
Epoch 448/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1612 - acc: 0.7720 - val_loss: 0.2594 - val_acc: 0.7100
Epoch 00448: val_acc did not improve from 0.72480
Epoch 449/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1605 - acc: 0.7728 - val_loss: 0.2594 - val_acc: 0.7088
Epoch 00449: val_acc did not improve from 0.72480
Epoch 450/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1621 - acc: 0.7709 - val_loss: 0.2582 - val_acc: 0.7096
Epoch 00450: val_acc did not improve from 0.72480
Epoch 451/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1595 - acc: 0.7752 - val_loss: 0.2601 - val_acc: 0.7116
Epoch 00451: val_acc did not improve from 0.72480
Epoch 452/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1625 - acc: 0.7663 - val_loss: 0.2605 - val_acc: 0.7092
Epoch 00452: val_acc did not improve from 0.72480
Epoch 453/500
7500/7500 [==============================] - 2s 282us/step - loss: 0.1571 - acc: 0.7775 - val_loss: 0.2628 - val_acc: 0.7116
Epoch 00453: val_acc did not improve from 0.72480
Epoch 454/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1614 - acc: 0.7697 - val_loss: 0.2616 - val_acc: 0.7088
Epoch 00454: val_acc did not improve from 0.72480
Epoch 455/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7712 - val_loss: 0.2649 - val_acc: 0.7104
Epoch 00455: val_acc did not improve from 0.72480
Epoch 456/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1601 - acc: 0.7691 - val_loss: 0.2601 - val_acc: 0.7104
Epoch 00456: val_acc did not improve from 0.72480
Epoch 457/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1609 - acc: 0.7708 - val_loss: 0.2644 - val_acc: 0.7068
Epoch 00457: val_acc did not improve from 0.72480
Epoch 458/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1597 - acc: 0.7735 - val_loss: 0.2658 - val_acc: 0.7060
Epoch 00458: val_acc did not improve from 0.72480
Epoch 459/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1595 - acc: 0.7728 - val_loss: 0.2651 - val_acc: 0.7104
Epoch 00459: val_acc did not improve from 0.72480
Epoch 460/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1608 - acc: 0.7737 - val_loss: 0.2613 - val_acc: 0.7096
Epoch 00460: val_acc did not improve from 0.72480
Epoch 461/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1587 - acc: 0.7747 - val_loss: 0.2648 - val_acc: 0.7084
Epoch 00461: val_acc did not improve from 0.72480
Epoch 462/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1596 - acc: 0.7732 - val_loss: 0.2693 - val_acc: 0.7076
Epoch 00462: val_acc did not improve from 0.72480
Epoch 463/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1600 - acc: 0.7696 - val_loss: 0.2661 - val_acc: 0.7064
Epoch 00463: val_acc did not improve from 0.72480
Epoch 464/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1600 - acc: 0.7740 - val_loss: 0.2622 - val_acc: 0.7128
Epoch 00464: val_acc did not improve from 0.72480
Epoch 465/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1588 - acc: 0.7749 - val_loss: 0.2657 - val_acc: 0.7076
Epoch 00465: val_acc did not improve from 0.72480
Epoch 466/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7707 - val_loss: 0.2673 - val_acc: 0.7068
Epoch 00466: val_acc did not improve from 0.72480
Epoch 467/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1594 - acc: 0.7741 - val_loss: 0.2629 - val_acc: 0.7088
Epoch 00467: val_acc did not improve from 0.72480
Epoch 468/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1607 - acc: 0.7675 - val_loss: 0.2636 - val_acc: 0.7080
Epoch 00468: val_acc did not improve from 0.72480
Epoch 469/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1583 - acc: 0.7748 - val_loss: 0.2645 - val_acc: 0.7088
Epoch 00469: val_acc did not improve from 0.72480
Epoch 470/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1597 - acc: 0.7721 - val_loss: 0.2623 - val_acc: 0.7088
Epoch 00470: val_acc did not improve from 0.72480
Epoch 471/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1581 - acc: 0.7736 - val_loss: 0.2569 - val_acc: 0.7100
Epoch 00471: val_acc did not improve from 0.72480
Epoch 472/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1581 - acc: 0.7735 - val_loss: 0.2552 - val_acc: 0.7104
Epoch 00472: val_acc did not improve from 0.72480
Epoch 473/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1590 - acc: 0.7729 - val_loss: 0.2572 - val_acc: 0.7100
Epoch 00473: val_acc did not improve from 0.72480
Epoch 474/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1601 - acc: 0.7700 - val_loss: 0.2568 - val_acc: 0.7112
Epoch 00474: val_acc did not improve from 0.72480
Epoch 475/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1586 - acc: 0.7709 - val_loss: 0.2591 - val_acc: 0.7092
Epoch 00475: val_acc did not improve from 0.72480
Epoch 476/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1583 - acc: 0.7753 - val_loss: 0.2535 - val_acc: 0.7120
Epoch 00476: val_acc did not improve from 0.72480
Epoch 477/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1580 - acc: 0.7771 - val_loss: 0.2562 - val_acc: 0.7104
Epoch 00477: val_acc did not improve from 0.72480
Epoch 478/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1569 - acc: 0.7776 - val_loss: 0.2549 - val_acc: 0.7100
Epoch 00478: val_acc did not improve from 0.72480
Epoch 479/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1564 - acc: 0.7753 - val_loss: 0.2558 - val_acc: 0.7116
Epoch 00479: val_acc did not improve from 0.72480
Epoch 480/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1580 - acc: 0.7707 - val_loss: 0.2524 - val_acc: 0.7124
Epoch 00480: val_acc did not improve from 0.72480
Epoch 481/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1569 - acc: 0.7760 - val_loss: 0.2564 - val_acc: 0.7092
Epoch 00481: val_acc did not improve from 0.72480
Epoch 482/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1564 - acc: 0.7781 - val_loss: 0.2499 - val_acc: 0.7120
Epoch 00482: val_acc did not improve from 0.72480
Epoch 483/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1579 - acc: 0.7723 - val_loss: 0.2533 - val_acc: 0.7092
Epoch 00483: val_acc did not improve from 0.72480
Epoch 484/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1563 - acc: 0.7764 - val_loss: 0.2541 - val_acc: 0.7124
Epoch 00484: val_acc did not improve from 0.72480
Epoch 485/500
7500/7500 [==============================] - 2s 271us/step - loss: 0.1564 - acc: 0.7785 - val_loss: 0.2530 - val_acc: 0.7140
Epoch 00485: val_acc did not improve from 0.72480
Epoch 486/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1558 - acc: 0.7748 - val_loss: 0.2498 - val_acc: 0.7120
Epoch 00486: val_acc did not improve from 0.72480
Epoch 487/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1565 - acc: 0.7779 - val_loss: 0.2520 - val_acc: 0.7132
Epoch 00487: val_acc did not improve from 0.72480
Epoch 488/500
7500/7500 [==============================] - 2s 272us/step - loss: 0.1560 - acc: 0.7765 - val_loss: 0.2504 - val_acc: 0.7124
Epoch 00488: val_acc did not improve from 0.72480
Epoch 489/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1552 - acc: 0.7769 - val_loss: 0.2523 - val_acc: 0.7120
Epoch 00489: val_acc did not improve from 0.72480
Epoch 490/500
7500/7500 [==============================] - 2s 275us/step - loss: 0.1555 - acc: 0.7765 - val_loss: 0.2506 - val_acc: 0.7112
Epoch 00490: val_acc did not improve from 0.72480
Epoch 491/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1563 - acc: 0.7749 - val_loss: 0.2520 - val_acc: 0.7120
Epoch 00491: val_acc did not improve from 0.72480
Epoch 492/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1571 - acc: 0.7737 - val_loss: 0.2518 - val_acc: 0.7104
Epoch 00492: val_acc did not improve from 0.72480
Epoch 493/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1554 - acc: 0.7785 - val_loss: 0.2530 - val_acc: 0.7128
Epoch 00493: val_acc did not improve from 0.72480
Epoch 494/500
7500/7500 [==============================] - 2s 274us/step - loss: 0.1544 - acc: 0.7799 - val_loss: 0.2570 - val_acc: 0.7104
Epoch 00494: val_acc did not improve from 0.72480
Epoch 495/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1560 - acc: 0.7768 - val_loss: 0.2531 - val_acc: 0.7116
Epoch 00495: val_acc did not improve from 0.72480
Epoch 496/500
7500/7500 [==============================] - 2s 269us/step - loss: 0.1551 - acc: 0.7784 - val_loss: 0.2563 - val_acc: 0.7096
Epoch 00496: val_acc did not improve from 0.72480
Epoch 497/500
7500/7500 [==============================] - 2s 268us/step - loss: 0.1568 - acc: 0.7759 - val_loss: 0.2539 - val_acc: 0.7120
Epoch 00497: val_acc did not improve from 0.72480
Epoch 498/500
7500/7500 [==============================] - 2s 267us/step - loss: 0.1554 - acc: 0.7765 - val_loss: 0.2509 - val_acc: 0.7124
Epoch 00498: val_acc did not improve from 0.72480
Epoch 499/500
7500/7500 [==============================] - 2s 273us/step - loss: 0.1536 - acc: 0.7797 - val_loss: 0.2508 - val_acc: 0.7120
Epoch 00499: val_acc did not improve from 0.72480
Epoch 500/500
7500/7500 [==============================] - 2s 270us/step - loss: 0.1539 - acc: 0.7824 - val_loss: 0.2529 - val_acc: 0.7100
Epoch 00500: val_acc did not improve from 0.72480
acc: 72.48%
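In the run above, `val_acc` never improves on 0.72480 across the 220+ epochs shown, so most of the training time is spent for no gain. A minimal sketch of how such a run could be cut short with Keras's `EarlyStopping` callback (not part of the original notebook; `restore_best_weights` assumes Keras >= 2.2.3, otherwise reload the file written by the `ModelCheckpoint` callback instead):

```
from keras.callbacks import EarlyStopping

# Stop once val_acc has not improved for 50 consecutive epochs and
# roll the weights back to the best epoch seen so far.
early_stopper = EarlyStopping(monitor='val_acc',
                              patience=50,
                              restore_best_weights=True,
                              verbose=1)

# Passed to fit alongside the existing checkpointer, e.g.
# model.fit(..., callbacks=[checkpointer, early_stopper])
```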
Build the 2-dimensional LSTM model Since each sequence in our data has only one output value, there is effectively just a single time step on the output side; if we drop that dimension in the model, we can work with a 2-dimensional LSTM model. Load the data again | X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)
num_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path)
test_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size)
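Before building the model it is worth confirming the tensor shapes. A small sanity check, assuming `encode_sequences_with_method` returns NumPy arrays with one one-hot channel per nucleotide plus padding (5 channels in total, matching the `input_shape` used in the next cell):

```
# Hypothetical shape check; variable names match the cell above and
# assume NumPy arrays are returned by the encoder.
print(X_train.shape)   # expected: (7500, max_sequence_len, 5)
print(y_train.shape)   # one scaled expression level per sequence
assert X_train.shape[1] == int(max_sequence_len)
assert X_train.shape[2] == 5
```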
Build up the model | # Define the model parameters
batch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data
epochs = 500  # matches the 500-epoch run shown in the output below
dropout = 0.3
learning_rate = 0.01  # note: unused below; optimizer='rmsprop' uses the Keras default rate
# Define the checkpointer to allow saving of models
model_type = 'lstm_sequential_2d_onehot'
save_path = SAVE_DIR + model_type + '.hdf5'
checkpointer = ModelCheckpoint(monitor='val_acc',
filepath=save_path,
verbose=1,
save_best_only=True)
# Define the model
model = Sequential()
# Build up the layers
model.add(LSTM(100, input_shape=(int(max_sequence_len), 5)))
model.add(Dropout(dropout))
model.add(Dense(50, activation='sigmoid'))
# model.add(Dense(25, activation='sigmoid'))
# model.add(Dense(12, activation='sigmoid'))
# model.add(Dense(6, activation='sigmoid'))
# model.add(Dense(3, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse',
optimizer='rmsprop',
metrics=['accuracy'])
print(model.summary()) | WARNING:tensorflow:From C:\Users\Lisboa\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\Lisboa\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 100) 42400
_________________________________________________________________
dropout_1 (Dropout) (None, 100) 0
_________________________________________________________________
dense_1 (Dense) (None, 50) 5050
_________________________________________________________________
dense_2 (Dense) (None, 1) 51
=================================================================
Total params: 47,501
Trainable params: 47,501
Non-trainable params: 0
_________________________________________________________________
None
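As a quick sanity check on this summary: an LSTM layer has 4 × (n² + n·m + n) weights for n units and m input features, so with n = 100 and m = 5 that gives 4 × (10,000 + 500 + 100) = 42,400, matching `lstm_1`. The dense layers contribute 100 × 50 + 50 = 5,050 and 50 × 1 + 1 = 51, which together give the 47,501 trainable parameters reported above.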
Fit and Evaluate the model | # Fit
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
validation_data=(X_test, y_test), callbacks=[checkpointer])
# Evaluate
score = max(history.history['val_acc'])
print("%s: %.2f%%" % (model.metrics_names[1], score*100))
plt = construct.plot_results(history.history)
plt.show() | WARNING:tensorflow:From C:\Users\Lisboa\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 7500 samples, validate on 2500 samples
Epoch 1/500
7500/7500 [==============================] - 6s 855us/step - loss: 0.2107 - acc: 0.6719 - val_loss: 0.1957 - val_acc: 0.7008
Epoch 00001: val_acc improved from -inf to 0.70080, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 2/500
7500/7500 [==============================] - 6s 755us/step - loss: 0.1912 - acc: 0.7187 - val_loss: 0.1911 - val_acc: 0.7304
Epoch 00002: val_acc improved from 0.70080 to 0.73040, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 3/500
7500/7500 [==============================] - 6s 735us/step - loss: 0.1859 - acc: 0.7241 - val_loss: 0.1872 - val_acc: 0.7116
Epoch 00003: val_acc did not improve from 0.73040
Epoch 4/500
7500/7500 [==============================] - 5s 730us/step - loss: 0.1807 - acc: 0.7387 - val_loss: 0.1804 - val_acc: 0.7344
Epoch 00004: val_acc improved from 0.73040 to 0.73440, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 5/500
7500/7500 [==============================] - 5s 710us/step - loss: 0.1771 - acc: 0.7419 - val_loss: 0.1632 - val_acc: 0.7628
Epoch 00005: val_acc improved from 0.73440 to 0.76280, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 6/500
7500/7500 [==============================] - 5s 685us/step - loss: 0.1732 - acc: 0.7492 - val_loss: 0.1672 - val_acc: 0.7528
Epoch 00006: val_acc did not improve from 0.76280
Epoch 7/500
7500/7500 [==============================] - 5s 691us/step - loss: 0.1692 - acc: 0.7588 - val_loss: 0.1605 - val_acc: 0.7716
Epoch 00007: val_acc improved from 0.76280 to 0.77160, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 8/500
7500/7500 [==============================] - 5s 680us/step - loss: 0.1668 - acc: 0.7659 - val_loss: 0.1562 - val_acc: 0.7824
Epoch 00008: val_acc improved from 0.77160 to 0.78240, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 9/500
7500/7500 [==============================] - 5s 663us/step - loss: 0.1624 - acc: 0.7704 - val_loss: 0.1764 - val_acc: 0.7528
Epoch 00009: val_acc did not improve from 0.78240
Epoch 10/500
7500/7500 [==============================] - 5s 660us/step - loss: 0.1589 - acc: 0.7749 - val_loss: 0.1555 - val_acc: 0.7796
Epoch 00010: val_acc did not improve from 0.78240
Epoch 11/500
7500/7500 [==============================] - 5s 654us/step - loss: 0.1566 - acc: 0.7779 - val_loss: 0.1450 - val_acc: 0.7932
Epoch 00011: val_acc improved from 0.78240 to 0.79320, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 12/500
7500/7500 [==============================] - 5s 659us/step - loss: 0.1494 - acc: 0.7923 - val_loss: 0.1880 - val_acc: 0.7312
Epoch 00012: val_acc did not improve from 0.79320
Epoch 13/500
7500/7500 [==============================] - 5s 650us/step - loss: 0.1491 - acc: 0.7901 - val_loss: 0.1461 - val_acc: 0.7980
Epoch 00013: val_acc improved from 0.79320 to 0.79800, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 14/500
7500/7500 [==============================] - 5s 652us/step - loss: 0.1450 - acc: 0.7987 - val_loss: 0.1365 - val_acc: 0.8124
Epoch 00014: val_acc improved from 0.79800 to 0.81240, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 15/500
7500/7500 [==============================] - 5s 661us/step - loss: 0.1455 - acc: 0.7984 - val_loss: 0.1490 - val_acc: 0.7948
Epoch 00015: val_acc did not improve from 0.81240
Epoch 16/500
7500/7500 [==============================] - 5s 652us/step - loss: 0.1411 - acc: 0.8060 - val_loss: 0.1462 - val_acc: 0.7960
Epoch 00016: val_acc did not improve from 0.81240
Epoch 17/500
7500/7500 [==============================] - 5s 645us/step - loss: 0.1394 - acc: 0.8064 - val_loss: 0.1446 - val_acc: 0.7908
Epoch 00017: val_acc did not improve from 0.81240
Epoch 18/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.1390 - acc: 0.8063 - val_loss: 0.1290 - val_acc: 0.8244
Epoch 00018: val_acc improved from 0.81240 to 0.82440, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 19/500
7500/7500 [==============================] - 5s 643us/step - loss: 0.1400 - acc: 0.8059 - val_loss: 0.1333 - val_acc: 0.8128
Epoch 00019: val_acc did not improve from 0.82440
Epoch 20/500
7500/7500 [==============================] - 5s 645us/step - loss: 0.1376 - acc: 0.8093 - val_loss: 0.1475 - val_acc: 0.7948
Epoch 00020: val_acc did not improve from 0.82440
Epoch 21/500
7500/7500 [==============================] - 5s 642us/step - loss: 0.1347 - acc: 0.8155 - val_loss: 0.1319 - val_acc: 0.8136
Epoch 00021: val_acc did not improve from 0.82440
Epoch 22/500
7500/7500 [==============================] - 5s 629us/step - loss: 0.1323 - acc: 0.8172 - val_loss: 0.1340 - val_acc: 0.8080
Epoch 00022: val_acc did not improve from 0.82440
Epoch 23/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.1306 - acc: 0.8225 - val_loss: 0.1524 - val_acc: 0.7848
Epoch 00023: val_acc did not improve from 0.82440
Epoch 24/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.1322 - acc: 0.8167 - val_loss: 0.1321 - val_acc: 0.8156
Epoch 00024: val_acc did not improve from 0.82440
Epoch 25/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.1312 - acc: 0.8196 - val_loss: 0.2003 - val_acc: 0.7308
Epoch 00025: val_acc did not improve from 0.82440
Epoch 26/500
7500/7500 [==============================] - 5s 627us/step - loss: 0.1299 - acc: 0.8277 - val_loss: 0.1260 - val_acc: 0.8212
Epoch 00026: val_acc did not improve from 0.82440
Epoch 27/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.1287 - acc: 0.8293 - val_loss: 0.1286 - val_acc: 0.8188
Epoch 00027: val_acc did not improve from 0.82440
Epoch 28/500
7500/7500 [==============================] - 5s 722us/step - loss: 0.1280 - acc: 0.8244 - val_loss: 0.1257 - val_acc: 0.8276
Epoch 00028: val_acc improved from 0.82440 to 0.82760, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 29/500
7500/7500 [==============================] - 6s 740us/step - loss: 0.1251 - acc: 0.8317 - val_loss: 0.1204 - val_acc: 0.8336
Epoch 00029: val_acc improved from 0.82760 to 0.83360, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 30/500
7500/7500 [==============================] - 5s 647us/step - loss: 0.1267 - acc: 0.8276 - val_loss: 0.1213 - val_acc: 0.8356
Epoch 00030: val_acc improved from 0.83360 to 0.83560, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 31/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.1243 - acc: 0.8339 - val_loss: 0.1483 - val_acc: 0.7948
Epoch 00031: val_acc did not improve from 0.83560
Epoch 32/500
7500/7500 [==============================] - 5s 703us/step - loss: 0.1248 - acc: 0.8328 - val_loss: 0.1208 - val_acc: 0.8328
Epoch 00032: val_acc did not improve from 0.83560
Epoch 33/500
7500/7500 [==============================] - 5s 680us/step - loss: 0.1232 - acc: 0.8328 - val_loss: 0.1271 - val_acc: 0.8296
Epoch 00033: val_acc did not improve from 0.83560
Epoch 34/500
7500/7500 [==============================] - 5s 647us/step - loss: 0.1227 - acc: 0.8347 - val_loss: 0.1294 - val_acc: 0.8224
Epoch 00034: val_acc did not improve from 0.83560
Epoch 35/500
7500/7500 [==============================] - 5s 727us/step - loss: 0.1203 - acc: 0.8385 - val_loss: 0.1238 - val_acc: 0.8292
Epoch 00035: val_acc did not improve from 0.83560
Epoch 36/500
7500/7500 [==============================] - 5s 671us/step - loss: 0.1217 - acc: 0.8352 - val_loss: 0.1247 - val_acc: 0.8240
Epoch 00036: val_acc did not improve from 0.83560
Epoch 37/500
7500/7500 [==============================] - 5s 710us/step - loss: 0.1201 - acc: 0.8377 - val_loss: 0.1198 - val_acc: 0.8352
Epoch 00037: val_acc did not improve from 0.83560
Epoch 38/500
7500/7500 [==============================] - 5s 650us/step - loss: 0.1191 - acc: 0.8423 - val_loss: 0.1190 - val_acc: 0.8392
Epoch 00038: val_acc improved from 0.83560 to 0.83920, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 39/500
7500/7500 [==============================] - 5s 684us/step - loss: 0.1170 - acc: 0.8437 - val_loss: 0.1232 - val_acc: 0.8320
Epoch 00039: val_acc did not improve from 0.83920
Epoch 40/500
7500/7500 [==============================] - 5s 671us/step - loss: 0.1166 - acc: 0.8481 - val_loss: 0.1167 - val_acc: 0.8416
Epoch 00040: val_acc improved from 0.83920 to 0.84160, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 41/500
7500/7500 [==============================] - 5s 678us/step - loss: 0.1155 - acc: 0.8457 - val_loss: 0.1204 - val_acc: 0.8340
Epoch 00041: val_acc did not improve from 0.84160
Epoch 42/500
7500/7500 [==============================] - 5s 629us/step - loss: 0.1162 - acc: 0.8461 - val_loss: 0.1291 - val_acc: 0.8240
Epoch 00042: val_acc did not improve from 0.84160
Epoch 43/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.1144 - acc: 0.8484 - val_loss: 0.1208 - val_acc: 0.8344
Epoch 00043: val_acc did not improve from 0.84160
Epoch 44/500
7500/7500 [==============================] - 5s 650us/step - loss: 0.1125 - acc: 0.8524 - val_loss: 0.1253 - val_acc: 0.8288
Epoch 00044: val_acc did not improve from 0.84160
Epoch 45/500
7500/7500 [==============================] - 5s 641us/step - loss: 0.1136 - acc: 0.8492 - val_loss: 0.1170 - val_acc: 0.8400
Epoch 00045: val_acc did not improve from 0.84160
Epoch 46/500
7500/7500 [==============================] - 5s 646us/step - loss: 0.1134 - acc: 0.8475 - val_loss: 0.1445 - val_acc: 0.7992
Epoch 00046: val_acc did not improve from 0.84160
Epoch 47/500
7500/7500 [==============================] - 5s 653us/step - loss: 0.1100 - acc: 0.8556 - val_loss: 0.1169 - val_acc: 0.8420
Epoch 00047: val_acc improved from 0.84160 to 0.84200, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 48/500
7500/7500 [==============================] - 5s 642us/step - loss: 0.1105 - acc: 0.8520 - val_loss: 0.1244 - val_acc: 0.8284
Epoch 00048: val_acc did not improve from 0.84200
Epoch 49/500
7500/7500 [==============================] - 5s 652us/step - loss: 0.1105 - acc: 0.8555 - val_loss: 0.1208 - val_acc: 0.8452
Epoch 00049: val_acc improved from 0.84200 to 0.84520, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 50/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.1080 - acc: 0.8541 - val_loss: 0.1176 - val_acc: 0.8456
Epoch 00050: val_acc improved from 0.84520 to 0.84560, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
Epoch 51/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.1077 - acc: 0.8592 - val_loss: 0.1267 - val_acc: 0.8288
Epoch 00051: val_acc did not improve from 0.84560
Epoch 52/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.1093 - acc: 0.8572 - val_loss: 0.1211 - val_acc: 0.8376
Epoch 00052: val_acc did not improve from 0.84560
Epoch 53/500
7500/7500 [==============================] - 5s 692us/step - loss: 0.1069 - acc: 0.8597 - val_loss: 0.1179 - val_acc: 0.8460
Epoch 00053: val_acc improved from 0.84560 to 0.84600, saving model to C:\Users\Lisboa\011019\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5
[Epochs 54-387 elided: every checkpoint line in this stretch reads "val_acc did not improve from 0.84600". Training loss fell from ~0.106 to ~0.017 and training acc rose from ~0.858 to ~0.982, while val_loss drifted up from the ~0.12-0.14 range to ~0.18-0.19 and val_acc oscillated between roughly 0.78 and 0.82; the model kept overfitting, and the epoch-53 weights remain the best saved model.]
Epoch 388/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0188 - acc: 0.9804 - val_loss: 0.1814 - val_acc: 0.8076
Epoch 00388: val_acc did not improve from 0.84600
Epoch 389/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0172 - acc: 0.9821 - val_loss: 0.1850 - val_acc: 0.8016
Epoch 00389: val_acc did not improve from 0.84600
Epoch 390/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0176 - acc: 0.9812 - val_loss: 0.1817 - val_acc: 0.8068
Epoch 00390: val_acc did not improve from 0.84600
Epoch 391/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0171 - acc: 0.9823 - val_loss: 0.1837 - val_acc: 0.8028
Epoch 00391: val_acc did not improve from 0.84600
Epoch 392/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0163 - acc: 0.9833 - val_loss: 0.1839 - val_acc: 0.8032
Epoch 00392: val_acc did not improve from 0.84600
Epoch 393/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0175 - acc: 0.9816 - val_loss: 0.1843 - val_acc: 0.8044
Epoch 00393: val_acc did not improve from 0.84600
Epoch 394/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0172 - acc: 0.9821 - val_loss: 0.1850 - val_acc: 0.8048
Epoch 00394: val_acc did not improve from 0.84600
Epoch 395/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0170 - acc: 0.9820 - val_loss: 0.1838 - val_acc: 0.8000
Epoch 00395: val_acc did not improve from 0.84600
Epoch 396/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0175 - acc: 0.9817 - val_loss: 0.1830 - val_acc: 0.8076
Epoch 00396: val_acc did not improve from 0.84600
Epoch 397/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0159 - acc: 0.9839 - val_loss: 0.1865 - val_acc: 0.8020
Epoch 00397: val_acc did not improve from 0.84600
Epoch 398/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0176 - acc: 0.9819 - val_loss: 0.1822 - val_acc: 0.8064
Epoch 00398: val_acc did not improve from 0.84600
Epoch 399/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0157 - acc: 0.9836 - val_loss: 0.1855 - val_acc: 0.8016
Epoch 00399: val_acc did not improve from 0.84600
Epoch 400/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0163 - acc: 0.9832 - val_loss: 0.1815 - val_acc: 0.8068
Epoch 00400: val_acc did not improve from 0.84600
Epoch 401/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0183 - acc: 0.9808 - val_loss: 0.1804 - val_acc: 0.8072
Epoch 00401: val_acc did not improve from 0.84600
Epoch 402/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0165 - acc: 0.9831 - val_loss: 0.1947 - val_acc: 0.7932
Epoch 00402: val_acc did not improve from 0.84600
Epoch 403/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0173 - acc: 0.9820 - val_loss: 0.1845 - val_acc: 0.8048
Epoch 00403: val_acc did not improve from 0.84600
Epoch 404/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0172 - acc: 0.9820 - val_loss: 0.1794 - val_acc: 0.8108
Epoch 00404: val_acc did not improve from 0.84600
Epoch 405/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0171 - acc: 0.9823 - val_loss: 0.1818 - val_acc: 0.8052
Epoch 00405: val_acc did not improve from 0.84600
Epoch 406/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0181 - acc: 0.9808 - val_loss: 0.1904 - val_acc: 0.7972
Epoch 00406: val_acc did not improve from 0.84600
Epoch 407/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0148 - acc: 0.9853 - val_loss: 0.1842 - val_acc: 0.8000
Epoch 00407: val_acc did not improve from 0.84600
Epoch 408/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0169 - acc: 0.9825 - val_loss: 0.1837 - val_acc: 0.8036
Epoch 00408: val_acc did not improve from 0.84600
Epoch 409/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0164 - acc: 0.9832 - val_loss: 0.1893 - val_acc: 0.7984
Epoch 00409: val_acc did not improve from 0.84600
Epoch 410/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0182 - acc: 0.9808 - val_loss: 0.1777 - val_acc: 0.8124
Epoch 00410: val_acc did not improve from 0.84600
Epoch 411/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0183 - acc: 0.9811 - val_loss: 0.1832 - val_acc: 0.8048
Epoch 00411: val_acc did not improve from 0.84600
Epoch 412/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0166 - acc: 0.9833 - val_loss: 0.1847 - val_acc: 0.8032
Epoch 00412: val_acc did not improve from 0.84600
Epoch 413/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0168 - acc: 0.9824 - val_loss: 0.1864 - val_acc: 0.8008
Epoch 00413: val_acc did not improve from 0.84600
Epoch 414/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0176 - acc: 0.9815 - val_loss: 0.1954 - val_acc: 0.7880
Epoch 00414: val_acc did not improve from 0.84600
Epoch 415/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0171 - acc: 0.9821 - val_loss: 0.1945 - val_acc: 0.7952
Epoch 00415: val_acc did not improve from 0.84600
Epoch 416/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0175 - acc: 0.9820 - val_loss: 0.1899 - val_acc: 0.7984
Epoch 00416: val_acc did not improve from 0.84600
Epoch 417/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0162 - acc: 0.9831 - val_loss: 0.1823 - val_acc: 0.8064
Epoch 00417: val_acc did not improve from 0.84600
Epoch 418/500
7500/7500 [==============================] - 5s 673us/step - loss: 0.0168 - acc: 0.9819 - val_loss: 0.1892 - val_acc: 0.7988
Epoch 00418: val_acc did not improve from 0.84600
Epoch 419/500
7500/7500 [==============================] - 5s 654us/step - loss: 0.0162 - acc: 0.9832 - val_loss: 0.1879 - val_acc: 0.8028
Epoch 00419: val_acc did not improve from 0.84600
Epoch 420/500
7500/7500 [==============================] - 5s 650us/step - loss: 0.0161 - acc: 0.9833 - val_loss: 0.1841 - val_acc: 0.8028
Epoch 00420: val_acc did not improve from 0.84600
Epoch 421/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0175 - acc: 0.9813 - val_loss: 0.1891 - val_acc: 0.7964
Epoch 00421: val_acc did not improve from 0.84600
Epoch 422/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0167 - acc: 0.9825 - val_loss: 0.1795 - val_acc: 0.8064
Epoch 00422: val_acc did not improve from 0.84600
Epoch 423/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0158 - acc: 0.9833 - val_loss: 0.1854 - val_acc: 0.8016
Epoch 00423: val_acc did not improve from 0.84600
Epoch 424/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.0156 - acc: 0.9839 - val_loss: 0.1833 - val_acc: 0.8064
Epoch 00424: val_acc did not improve from 0.84600
Epoch 425/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0159 - acc: 0.9835 - val_loss: 0.1884 - val_acc: 0.7960
Epoch 00425: val_acc did not improve from 0.84600
Epoch 426/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0163 - acc: 0.9829 - val_loss: 0.1849 - val_acc: 0.8008
Epoch 00426: val_acc did not improve from 0.84600
Epoch 427/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0158 - acc: 0.9839 - val_loss: 0.1873 - val_acc: 0.8008
Epoch 00427: val_acc did not improve from 0.84600
Epoch 428/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0171 - acc: 0.9820 - val_loss: 0.1796 - val_acc: 0.8116
Epoch 00428: val_acc did not improve from 0.84600
Epoch 429/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.0168 - acc: 0.9820 - val_loss: 0.1874 - val_acc: 0.8020
Epoch 00429: val_acc did not improve from 0.84600
Epoch 430/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0173 - acc: 0.9820 - val_loss: 0.1861 - val_acc: 0.7996
Epoch 00430: val_acc did not improve from 0.84600
Epoch 431/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0160 - acc: 0.9835 - val_loss: 0.1883 - val_acc: 0.7996
Epoch 00431: val_acc did not improve from 0.84600
Epoch 432/500
7500/7500 [==============================] - 5s 641us/step - loss: 0.0161 - acc: 0.9833 - val_loss: 0.1916 - val_acc: 0.7960
Epoch 00432: val_acc did not improve from 0.84600
Epoch 433/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0166 - acc: 0.9821 - val_loss: 0.1878 - val_acc: 0.7956
Epoch 00433: val_acc did not improve from 0.84600
Epoch 434/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0156 - acc: 0.9836 - val_loss: 0.1861 - val_acc: 0.7996
Epoch 00434: val_acc did not improve from 0.84600
Epoch 435/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0156 - acc: 0.9835 - val_loss: 0.1864 - val_acc: 0.8032
Epoch 00435: val_acc did not improve from 0.84600
Epoch 436/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0158 - acc: 0.9832 - val_loss: 0.1852 - val_acc: 0.8044
Epoch 00436: val_acc did not improve from 0.84600
Epoch 437/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0150 - acc: 0.9845 - val_loss: 0.1949 - val_acc: 0.7928
Epoch 00437: val_acc did not improve from 0.84600
Epoch 438/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1939 - val_acc: 0.7956
Epoch 00438: val_acc did not improve from 0.84600
Epoch 439/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0149 - acc: 0.9851 - val_loss: 0.1868 - val_acc: 0.8032
Epoch 00439: val_acc did not improve from 0.84600
Epoch 440/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0159 - acc: 0.9836 - val_loss: 0.1917 - val_acc: 0.7948
Epoch 00440: val_acc did not improve from 0.84600
Epoch 441/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0150 - acc: 0.9847 - val_loss: 0.1905 - val_acc: 0.7972
Epoch 00441: val_acc did not improve from 0.84600
Epoch 442/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0163 - acc: 0.9827 - val_loss: 0.1894 - val_acc: 0.7956
Epoch 00442: val_acc did not improve from 0.84600
Epoch 443/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1906 - val_acc: 0.7960
Epoch 00443: val_acc did not improve from 0.84600
Epoch 444/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0156 - acc: 0.9839 - val_loss: 0.1898 - val_acc: 0.7980
Epoch 00444: val_acc did not improve from 0.84600
Epoch 445/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0146 - acc: 0.9849 - val_loss: 0.1866 - val_acc: 0.8012
Epoch 00445: val_acc did not improve from 0.84600
Epoch 446/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0158 - acc: 0.9837 - val_loss: 0.1841 - val_acc: 0.8036
Epoch 00446: val_acc did not improve from 0.84600
Epoch 447/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0150 - acc: 0.9843 - val_loss: 0.1844 - val_acc: 0.8020
Epoch 00447: val_acc did not improve from 0.84600
Epoch 448/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0153 - acc: 0.9844 - val_loss: 0.1901 - val_acc: 0.7992
Epoch 00448: val_acc did not improve from 0.84600
Epoch 449/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0170 - acc: 0.9820 - val_loss: 0.1875 - val_acc: 0.8016
Epoch 00449: val_acc did not improve from 0.84600
Epoch 450/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0155 - acc: 0.9835 - val_loss: 0.1931 - val_acc: 0.7940
Epoch 00450: val_acc did not improve from 0.84600
Epoch 451/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0171 - acc: 0.9820 - val_loss: 0.1829 - val_acc: 0.8056
Epoch 00451: val_acc did not improve from 0.84600
Epoch 452/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0154 - acc: 0.9835 - val_loss: 0.1877 - val_acc: 0.7992
Epoch 00452: val_acc did not improve from 0.84600
Epoch 453/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0157 - acc: 0.9839 - val_loss: 0.1910 - val_acc: 0.7956
Epoch 00453: val_acc did not improve from 0.84600
Epoch 454/500
7500/7500 [==============================] - 5s 630us/step - loss: 0.0161 - acc: 0.9832 - val_loss: 0.1934 - val_acc: 0.7936
Epoch 00454: val_acc did not improve from 0.84600
Epoch 455/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0151 - acc: 0.9843 - val_loss: 0.1867 - val_acc: 0.8036
Epoch 00455: val_acc did not improve from 0.84600
Epoch 456/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0160 - acc: 0.9837 - val_loss: 0.1876 - val_acc: 0.8028
Epoch 00456: val_acc did not improve from 0.84600
Epoch 457/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0161 - acc: 0.9829 - val_loss: 0.1930 - val_acc: 0.7940
Epoch 00457: val_acc did not improve from 0.84600
Epoch 458/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0163 - acc: 0.9835 - val_loss: 0.1870 - val_acc: 0.7988
Epoch 00458: val_acc did not improve from 0.84600
Epoch 459/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1873 - val_acc: 0.8008
Epoch 00459: val_acc did not improve from 0.84600
Epoch 460/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0146 - acc: 0.9849 - val_loss: 0.1938 - val_acc: 0.7952
Epoch 00460: val_acc did not improve from 0.84600
Epoch 461/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0156 - acc: 0.9835 - val_loss: 0.1809 - val_acc: 0.8092
Epoch 00461: val_acc did not improve from 0.84600
Epoch 462/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0170 - acc: 0.9817 - val_loss: 0.1875 - val_acc: 0.8024
Epoch 00462: val_acc did not improve from 0.84600
Epoch 463/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0155 - acc: 0.9836 - val_loss: 0.1867 - val_acc: 0.8032
Epoch 00463: val_acc did not improve from 0.84600
Epoch 464/500
7500/7500 [==============================] - 5s 641us/step - loss: 0.0152 - acc: 0.9843 - val_loss: 0.1955 - val_acc: 0.7932
Epoch 00464: val_acc did not improve from 0.84600
Epoch 465/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.0152 - acc: 0.9841 - val_loss: 0.1894 - val_acc: 0.8004
Epoch 00465: val_acc did not improve from 0.84600
Epoch 466/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0153 - acc: 0.9840 - val_loss: 0.1880 - val_acc: 0.8024
Epoch 00466: val_acc did not improve from 0.84600
Epoch 467/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0154 - acc: 0.9840 - val_loss: 0.1896 - val_acc: 0.8000
Epoch 00467: val_acc did not improve from 0.84600
Epoch 468/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1877 - val_acc: 0.8012
Epoch 00468: val_acc did not improve from 0.84600
Epoch 469/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0149 - acc: 0.9845 - val_loss: 0.1914 - val_acc: 0.7944
Epoch 00469: val_acc did not improve from 0.84600
Epoch 470/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0144 - acc: 0.9851 - val_loss: 0.1860 - val_acc: 0.8024
Epoch 00470: val_acc did not improve from 0.84600
Epoch 471/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0153 - acc: 0.9837 - val_loss: 0.1819 - val_acc: 0.8084
Epoch 00471: val_acc did not improve from 0.84600
Epoch 472/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0161 - acc: 0.9828 - val_loss: 0.1953 - val_acc: 0.7908
Epoch 00472: val_acc did not improve from 0.84600
Epoch 473/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0143 - acc: 0.9848 - val_loss: 0.1809 - val_acc: 0.8076
Epoch 00473: val_acc did not improve from 0.84600
Epoch 474/500
7500/7500 [==============================] - 5s 630us/step - loss: 0.0155 - acc: 0.9837 - val_loss: 0.1953 - val_acc: 0.7900
Epoch 00474: val_acc did not improve from 0.84600
Epoch 475/500
7500/7500 [==============================] - 5s 644us/step - loss: 0.0154 - acc: 0.9833 - val_loss: 0.1857 - val_acc: 0.7976
Epoch 00475: val_acc did not improve from 0.84600
Epoch 476/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0156 - acc: 0.9832 - val_loss: 0.1836 - val_acc: 0.8056
Epoch 00476: val_acc did not improve from 0.84600
Epoch 477/500
7500/7500 [==============================] - 5s 635us/step - loss: 0.0146 - acc: 0.9848 - val_loss: 0.1815 - val_acc: 0.8096
Epoch 00477: val_acc did not improve from 0.84600
Epoch 478/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0149 - acc: 0.9848 - val_loss: 0.1846 - val_acc: 0.8028
Epoch 00478: val_acc did not improve from 0.84600
Epoch 479/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0148 - acc: 0.9841 - val_loss: 0.1866 - val_acc: 0.8024
Epoch 00479: val_acc did not improve from 0.84600
Epoch 480/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0148 - acc: 0.9845 - val_loss: 0.1878 - val_acc: 0.8000
Epoch 00480: val_acc did not improve from 0.84600
Epoch 481/500
7500/7500 [==============================] - 5s 639us/step - loss: 0.0138 - acc: 0.9856 - val_loss: 0.1859 - val_acc: 0.8052
Epoch 00481: val_acc did not improve from 0.84600
Epoch 482/500
7500/7500 [==============================] - 5s 640us/step - loss: 0.0143 - acc: 0.9849 - val_loss: 0.1797 - val_acc: 0.8080
Epoch 00482: val_acc did not improve from 0.84600
Epoch 483/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0160 - acc: 0.9829 - val_loss: 0.1861 - val_acc: 0.8052
Epoch 00483: val_acc did not improve from 0.84600
Epoch 484/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0144 - acc: 0.9844 - val_loss: 0.1836 - val_acc: 0.8052
Epoch 00484: val_acc did not improve from 0.84600
Epoch 485/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0138 - acc: 0.9859 - val_loss: 0.1843 - val_acc: 0.8048
Epoch 00485: val_acc did not improve from 0.84600
Epoch 486/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0137 - acc: 0.9860 - val_loss: 0.1924 - val_acc: 0.7964
Epoch 00486: val_acc did not improve from 0.84600
Epoch 487/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0150 - acc: 0.9843 - val_loss: 0.1864 - val_acc: 0.7992
Epoch 00487: val_acc did not improve from 0.84600
Epoch 488/500
7500/7500 [==============================] - 5s 631us/step - loss: 0.0150 - acc: 0.9841 - val_loss: 0.1907 - val_acc: 0.8004
Epoch 00488: val_acc did not improve from 0.84600
Epoch 489/500
7500/7500 [==============================] - 5s 634us/step - loss: 0.0146 - acc: 0.9845 - val_loss: 0.1877 - val_acc: 0.8020
Epoch 00489: val_acc did not improve from 0.84600
Epoch 490/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0150 - acc: 0.9844 - val_loss: 0.1876 - val_acc: 0.8032
Epoch 00490: val_acc did not improve from 0.84600
Epoch 491/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0155 - acc: 0.9839 - val_loss: 0.1893 - val_acc: 0.7984
Epoch 00491: val_acc did not improve from 0.84600
Epoch 492/500
7500/7500 [==============================] - 5s 637us/step - loss: 0.0149 - acc: 0.9847 - val_loss: 0.1752 - val_acc: 0.8148
Epoch 00492: val_acc did not improve from 0.84600
Epoch 493/500
7500/7500 [==============================] - 5s 638us/step - loss: 0.0154 - acc: 0.9835 - val_loss: 0.1774 - val_acc: 0.8128
Epoch 00493: val_acc did not improve from 0.84600
Epoch 494/500
7500/7500 [==============================] - 5s 633us/step - loss: 0.0145 - acc: 0.9851 - val_loss: 0.1856 - val_acc: 0.8020
Epoch 00494: val_acc did not improve from 0.84600
Epoch 495/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0148 - acc: 0.9848 - val_loss: 0.1789 - val_acc: 0.8096
Epoch 00495: val_acc did not improve from 0.84600
Epoch 496/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0134 - acc: 0.9860 - val_loss: 0.1879 - val_acc: 0.8024
Epoch 00496: val_acc did not improve from 0.84600
Epoch 497/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0146 - acc: 0.9848 - val_loss: 0.1767 - val_acc: 0.8144
Epoch 00497: val_acc did not improve from 0.84600
Epoch 498/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0136 - acc: 0.9859 - val_loss: 0.1873 - val_acc: 0.8036
Epoch 00498: val_acc did not improve from 0.84600
Epoch 499/500
7500/7500 [==============================] - 5s 636us/step - loss: 0.0140 - acc: 0.9859 - val_loss: 0.1845 - val_acc: 0.8076
Epoch 00499: val_acc did not improve from 0.84600
Epoch 500/500
7500/7500 [==============================] - 5s 632us/step - loss: 0.0136 - acc: 0.9861 - val_loss: 0.1843 - val_acc: 0.8044
Epoch 00500: val_acc did not improve from 0.84600
acc: 84.60%
| MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
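The epochs above all print the same checkpoint message because val_acc never improves on its earlier best of 0.84600. A hedged sketch of how such idle training could be cut short with Keras callbacks; `model`, `X_train`, `y_train`, the checkpoint path, and the exact import path are assumptions here, not taken from the notebook:

from keras.callbacks import EarlyStopping, ModelCheckpoint  # or tensorflow.keras.callbacks, depending on version

# Stop once val_acc has not improved for 25 consecutive epochs,
# and keep a checkpoint of the best weights seen so far on disk.
callbacks = [
    EarlyStopping(monitor='val_acc', patience=25, verbose=1),
    ModelCheckpoint('best_lstm.h5', monitor='val_acc', save_best_only=True, verbose=1),
]

# Hypothetical training call: X_train/y_train stand in for the notebook's actual arrays.
history = model.fit(X_train, y_train, validation_split=0.25,
                    epochs=500, batch_size=32, callbacks=callbacks)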
Checking predictions on a small sample of native data | input_seqs = ROOT_DIR + 'expressyeaself/models/lstm/native_sample.txt'
model_to_use = 'lstm_sequential_2d'
lstm_result = construct.get_predictions_for_input_file(input_seqs, model_to_use, sort_df=True, write_to_file=False)
lstm_result.to_csv('lstm_result.csv') # save the predictions to a CSV file
lstm_result | _____no_output_____ | MIT | expressyeaself/models/lstm/LSTM_builder.ipynb | yeastpro/expressYeaself |
Welcome to the Woodgreen Data Science & Python Program by Fireside Analytics

In this tutorial, participants will be introduced to the Python programming language in this Python cloud environment called Google Colab. For more information about this tutorial or other tutorials by Fireside Analytics, contact: info@firesideanalytics.com

Table of contents
1. How does a computer work?
2. What is "data"?
3. An introduction to Python

**Let's get started! Firstly, this page you are reading is not a regular website, it is an interactive computer programming environment called a Colab notebook that lets you write and execute code in Python.**

1. How does a computer work?
A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS.

EXAMPLES OF INPUTS
1. Keyboard
2. Mouse
3. Touch screen

PROCESSES
1. CPU - Central Processing Unit
2. Data storage
3. Converts inputs from words and numbers to 1s and 0s
4. Computes 1s and 0s
5. Produces outputs and information

OUTPUTS
1. Screen - words, numbers, pictures or sounds
2. Printer
3. Speaker

2. What is "data"?
A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS.
1. Computers use many on and off switches to work
2. The 'on' switch is represented by a '1' and the 'off' switch is represented by a '0'
3. A BIT is a one or a zero, and a BYTE is a combination of 8 ones and zeros e.g., 1100 0010
4. Combinations of ones and zeros in a computer represent whole words and numbers, symbols and even pictures in the real world
5. Information stored in ones and zeros, in bits and bytes, is data!

* The letter a = 0110 0001
* The letter b = 0110 0010
* The letter A = 0100 0001
* The letter B = 0100 0010
* The symbol @ = 1000 0000

This conversion is done with the ASCII Code, the American Standard Code for Information Interchange.

*Computer programming is the process of giving a computer instructions in human readable language so a computer will know what to do in computer language.*

3. An introduction to Python
Let's get to know Python. The following code is an example of a Python program. Run the code by clicking on the 'play' button and you will see the result of your program beneath the code. | ## Your first computer program can be to say hello!
print ("Hello, World")
# We will need to learn some syntax! Syntax are the words used in a Python program
# the '#' sign tells Python to ignore a line. We use it for notes that we want humans to read
# print() is a function built into the core of Python
# For more sophisticed operations we'll load libraries which come with additional functions that we can use
# Famous ones are numpy, pandas, matplotlib, seaborn, and scikitlearn
# Now, let's write some programs!
# Edit the line below to add your first name between the ""
## Here we assign the letters between "" to an object called "my_name" - it is now stored and you can call it later
## Like saving a number in your phone versus just typing it in and calling it
my_name = ""
# Let's see what we've created
my_name
greeting = "Hello, world, my name is "
# Let's look at it
greeting
# The = sign is what we call an 'assignment operator' and it assigns things
# See how we use the '+' sign
print(greeting + my_name)
# Asking for input, using simple function and printing it
def say_hello():
username = input("What is your name?\n")
print("Hello " + username)
# Lets call the function
say_hello()
# Creating an 'If else' conditional block inside the function. Here we are validating the response entered.
# If the person simply hits "Enter" without entering any value in the field,
# then the if statement prints "You can't introduce yourself if you don't add your name!"
# the == operator is used to test if something is equal to something else
def say_hello():
username = input("What is your name?\n")
if username == "":
print("You can't introduce yourself if you don't add your name!")
else:
print("Hello " + username)
# While calling the function, try leaving the field blank
say_hello()
# Dealing with a blank
def say_hello(name):
if name == "":
print("You can't introduce yourself if you don't add your name!")
else:
print(greeting + name)
# Click the "play" button to execute this code.
say_hello(my_name)
# In programming there are often many ways to do things, for example
print("Hello world, my name is " + my_name + ".") | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
**We can do simple calculations in Python** | 5 + 5
# Some actions already programmed in:
x = 5
print(x + 7)
# What happens when we say "X=5"
# x 'points' at the number 5
x = 5
print("Initial x is:", x)
# y now 'points' at 'x' which 'points' at 5, so then y points at 5
y = x
print("Initial y is:", y)
x = 6
# What happens when we now change what x is?
print("Current x is:", x)
print("Current y is:", y) | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
------------------------------------------------------------------------ **We can do complex calculations in Python** - Remember we said Netflix users stream 404,444 hours of movies every minute? Let's calculate how many days that is! | ## In Python we create objects
## Converting from 404,444 hours to days, we divide by 24 (the number of hours in a day)
days_watching_netflix = 404444/24 | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
How can we do a survey in Python? We type 'input' to let Python know to wait for a user response. Once you type in the name, Python will remember it! Press 'enter' after your input. | response_1 = input("Response 1: What is your name?")
## We can now look at the response
response_1
response_2 = input("Response 2: What is your name?")
response_3 = input("Response 3: What is your name?")
response_4 = input("Response 4: What is your name?")
response_5 = input("Response 5: What is your name?") | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
Let's look at all five responses | print(response_1,
response_2,
response_3,
response_4,
response_5) | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
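The five separate response variables above could also be collected with a `for` loop, which scales to any number of participants; a sketch:

responses = []
for i in range(1, 6):
    # Ask each participant in turn and store the answer in a list
    responses.append(input("Response " + str(i) + ": What is your name?"))
print(responses)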
We can also add the names one at a time by typing them. | ## Let's create an object for the 5 names from question 1
survey_names = [response_1, response_2, response_3, response_4, response_5]
## Let's look at the object we've just created!
survey_names
print(survey_names) | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
Let's make a simple bar chart in Python | import matplotlib.pyplot as plt
x = ['A', 'B', 'C', 'D', 'E']
y = [22, 9, 40, 27, 55]
plt.bar(x, y, color = 'red')
plt.title('Simple Bar Chart')
plt.xlabel('Width Names')
plt.ylabel('Height Values')
plt.show()
# Replot the same chart and change the color of the bars | _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
Here's a sample chart with some survey responses. | import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
data = [3,2]
labels = ['yes', 'no']
plt.xticks(range(len(data)), labels)
plt.xlabel('Responses')
plt.ylabel('Number of People')
plt.title('Shingai - Woodgreen Data Science & Python Program: Survey Results for Questions 2: "Do you know how a computer works?"')
plt.bar(range(len(data)), data, color = 'blue')
plt.show()
| _____no_output_____ | MIT | Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb | tjido/woodgreen |
CREATING THE SARIMA MODEL FOR THE SARDEGNA REGION | import pandas as pd
df = pd.read_csv('../../csv/regioni/sardegna.csv')
df.head()
df['DATA'] = pd.to_datetime(df['DATA'])
df.info()
df=df.set_index('DATA')
df.head() | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Creating the time series of total deaths for the Sardegna region | ts = df.TOTALE
ts.head()
from datetime import datetime
from datetime import timedelta
start_date = datetime(2015,1,1)
end_date = datetime(2020,9,30)
lim_ts = ts[start_date:end_date]
# visualize the plot
import matplotlib.pyplot as plt
plt.figure(figsize=(12,6))
plt.title('Monthly deaths in the Sardegna region from 2015 to September 2020', size=20)
plt.plot(lim_ts)
for year in range(start_date.year,end_date.year+1):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.5) | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Decomposition | from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
ts_trend = decomposition.trend # trend component
ts_seasonal = decomposition.seasonal # seasonality
ts_residual = decomposition.resid # remaining (residual) component
plt.subplot(411)
plt.plot(ts,label='original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(ts_trend,label='trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(ts_seasonal,label='seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(ts_residual,label='residual')
plt.legend(loc='best')
plt.tight_layout() | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Stationarity test | from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
critical_value = dftest[4]['5%']
test_statistic = dftest[0]
alpha = 1e-3
pvalue = dftest[1]
if pvalue < alpha and test_statistic < critical_value: # reject the ADF null hypothesis that x is non-stationary
print("X is stationary")
return True
else:
print("X is not stationary")
return False
test_stationarity(ts) | X is not stationary
| Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Splitting into Train and Test. Train: from January 2015 to October 2019; Test: from November 2019 to December 2019. | from datetime import datetime
train_end = datetime(2019,10,31)
test_end = datetime (2019,12,31)
covid_end = datetime(2020,9,30)
from dateutil.relativedelta import *
tsb = ts[:test_end]
decomposition = seasonal_decompose(tsb, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
tsb_trend = decomposition.trend # trend component
tsb_seasonal = decomposition.seasonal # seasonality
tsb_residual = decomposition.resid # remaining (residual) component
tsb_diff = pd.Series(tsb_trend)
d = 0
while test_stationarity(tsb_diff) is False:
tsb_diff = tsb_diff.diff().dropna()
d = d + 1
print(d)
# TRAIN: from 2015-01-01 to 2019-10-31
train = tsb[:train_end]
# TEST: from 2019-11-01 to 2019-12-31
test = tsb[train_end + relativedelta(months=+1): test_end]
X is not stationary
X is not stationary
X is stationary
3
| Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Autocorrelation and Partial Autocorrelation plots | from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(ts, lags =12)
plot_pacf(ts, lags =12)
plt.show() | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Fitting the SARIMA model on the Train set | from statsmodels.tsa.statespace.sarimax import SARIMAX
model = SARIMAX(train, order=(6,1,8))
model_fit = model.fit()
print(model_fit.summary()) | c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.
warnings.warn('No frequency information was'
c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.
warnings.warn('No frequency information was'
c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\statespace\sarimax.py:977: UserWarning: Non-invertible starting MA parameters found. Using zeros as starting parameters.
warn('Non-invertible starting MA parameters found.'
| Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Checking the stationarity of the residuals of the fitted model | residuals = model_fit.resid
test_stationarity(residuals)
plt.figure(figsize=(12,6))
plt.title('Comparison of model-fitted values with actual Train values', size=20)
plt.plot (train.iloc[1:], color='red', label='train values')
plt.plot (model_fit.fittedvalues.iloc[1:], color = 'blue', label='model values')
plt.legend()
plt.show()
conf = model_fit.conf_int()
plt.figure(figsize=(12,6))
plt.title('Confidence intervals of the model', size=20)
plt.plot(conf)
plt.xticks(rotation=45)
plt.show() | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Model prediction on the Test set | # start and end of the prediction
pred_start = test.index[0]
pred_end = test.index[-1]
#pred_start= len(train)
#pred_end = len(tsb)
# prediction of the model on the test set
predictions_test= model_fit.predict(start=pred_start, end=pred_end)
plt.plot(test, color='red', label='actual')
plt.plot(predictions_test, label='prediction' )
plt.xticks(rotation=45)
plt.legend()
plt.show()
print(predictions_test)
# Accuracy metrics
import numpy as np
def forecast_accuracy(forecast, actual):
mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE: mean absolute percentage error
me = np.mean(forecast - actual) # ME: mean error
mae = np.mean(np.abs(forecast - actual)) # MAE: mean absolute error
mpe = np.mean((forecast - actual)/actual) # MPE: mean percentage error
rmse = np.mean((forecast - actual)**2)**.5 # RMSE: root mean squared error
corr = np.corrcoef(forecast, actual)[0,1] # corr: correlation between actual and forecast
mins = np.amin(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
maxs = np.amax(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
minmax = 1 - np.mean(mins/maxs) # minmax: min-max error
return({'mape':mape, 'me':me, 'mae': mae,
'mpe': mpe, 'rmse':rmse,
'corr':corr, 'minmax':minmax})
forecast_accuracy(predictions_test, test)
import numpy as np
from statsmodels.tools.eval_measures import rmse
nrmse = rmse(predictions_test, test)/(np.max(test)-np.min(test))
print('NRMSE: %f'% nrmse) | NRMSE: 0.028047
| Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Model prediction including the year 2020 | # start and end of the prediction
start_prediction = ts.index[0]
end_prediction = ts.index[-1]
predictions_tot = model_fit.predict(start=start_prediction, end=end_prediction)
plt.figure(figsize=(12,6))
plt.title('Model forecast on observed data - from 2015 to 30 September 2020', size=20)
plt.plot(ts, color='blue', label='actual')
plt.plot(predictions_tot.iloc[1:], color='red', label='predict')
plt.xticks(rotation=45)
plt.legend(prop={'size': 12})
plt.show()
diff_predictions_tot = (ts - predictions_tot)
plt.figure(figsize=(12,6))
plt.title('Difference between observed values and model-estimated values', size=20)
plt.plot(diff_predictions_tot)
plt.show()
diff_predictions_tot['24-02-2020':].sum()
predictions_tot.to_csv('../../csv/pred/predictions_SARIMA_sardegna.csv') | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Confidence intervals of the full forecast | forecast = model_fit.get_prediction(start=start_prediction, end=end_prediction)
in_c = forecast.conf_int()
print(forecast.predicted_mean)
print(in_c)
print(forecast.predicted_mean - in_c['lower TOTALE'])
plt.plot(in_c)
plt.show()
upper = in_c['upper TOTALE']
lower = in_c['lower TOTALE']
lower.to_csv('../../csv/lower/predictions_SARIMA_sardegna_lower.csv')
upper.to_csv('../../csv/upper/predictions_SARIMA_sardegna_upper.csv') | _____no_output_____ | Unlicense | Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb | SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths |
Preparation | import pandas as pd
df_mortality = pd.read_excel(io='MortalityDataWHR2021C2.xlsx')
df_happiness = pd.read_excel(io='DataForFigure2.1WHR2021C2.xls')
df_regions = df_happiness[['Country name', 'Regional indicator']]
df = df_regions.merge(df_mortality)
df.head() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Islands Number of Islands | df.Island.sum() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Which region had more Islands? | df.groupby('Regional indicator').Island.sum() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Show all Columns for these Islands | mask_region = df['Regional indicator'] == 'Western Europe'
mask_island = df['Island'] == 1
df_europe_islands = df[mask_region & mask_island]
df_europe_islands | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Mean Age of across All Islands? | df_europe_islands['Median age'].mean() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Female Heads of State Number of Countries with Female Heads of State | df['Female head of government'].sum() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Which region had more Female Heads of State? | df.groupby('Regional indicator')['Female head of government'].sum().sort_values(ascending=False) | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Show all Columns for these Countries | mask_region = df['Regional indicator'] == 'Western Europe'
mask_female = df['Female head of government'] == 1
df_europe_femaleheads = df[mask_region & mask_female]
df_europe_femaleheads | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Mean Age of across All Countries? | df_europe_femaleheads['Median age'].mean() | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
Pivot Tables | df_panel = pd.read_excel(io='DataPanelWHR2021C2.xls')
df = df_panel.merge(df_regions)
df.pivot_table(index='Regional indicator', columns='year', values='Log GDP per capita') | _____no_output_____ | MIT | #01. Data Tables & Basic Concepts of Programming/Untitled.ipynb | gabisintope/machine-learning-program |
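`pivot_table` aggregates with the mean by default; the same table can be written with the aggregation made explicit, and another statistic swapped in if needed, as in this sketch:

# Equivalent explicit form; changing aggfunc changes the statistic
df.pivot_table(index='Regional indicator', columns='year',
               values='Log GDP per capita', aggfunc='mean')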
Occupation. Introduction: Special thanks to https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries | import pandas as pd
import numpy as np | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users. | url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user'
users = pd.read_csv(url, sep='|')
users | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 4. Discover what is the mean age per occupation | users.groupby(['occupation'])['age'].mean() | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 5. Discover the Male ratio per occupation and sort it from the most to the least | if 'is_male' not in users:
users['is_male'] = users['gender'].apply(lambda x: x == 'M')
users
male_employees = users.loc[users['gender'] == 'M'].groupby(['occupation']).size().astype('float')
# print("male employees:", male_employees)
female_employees = users.loc[users['gender'] == 'F'].groupby(['occupation']).size().astype('float')
# print(type(female_employees[0]))
# print("female employees:", female_employees)
m_f_ratio_occupations = male_employees.divide(female_employees, fill_value=0)
m_f_ratio_occupations.sort_values(ascending=False) | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 6. For each occupation, calculate the minimum and maximum ages | users.groupby(['occupation'])['age'].min()
users.groupby(['occupation'])['age'].max() | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 7. For each combination of occupation and gender, calculate the mean age | users.loc[users['gender'] == 'M'].groupby(['occupation'])['age'].mean()
users.loc[users['gender']=='F'].groupby(['occupation'])['age'].mean() | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
Step 8. For each occupation present the percentage of women and men | percent_male = np.abs((male_employees - female_employees))/male_employees
percent_male
percent_female = 1 - percent_male
percent_female | _____no_output_____ | BSD-3-Clause | 03_Grouping/Occupation/Exercise.ipynb | mtzupan/pandas_exercises |
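An alternative one-liner for the same question, sketched here, uses `value_counts(normalize=True)` within each occupation group:

# Fraction of each gender within every occupation
users.groupby('occupation')['gender'].value_counts(normalize=True)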
Sentiment analysis with support vector machines. In this notebook, we will revisit a learning task that we encountered earlier in the course: predicting the *sentiment* (positive or negative) of a single sentence taken from a review of a movie, restaurant, or product. The data set consists of 3000 labeled sentences, which we divide into a training set of size 2500 and a test set of size 500. Previously we found a logistic regression classifier. Today we will use a support vector machine. Before starting on this notebook, make sure the folder `sentiment_labelled_sentences` (containing the data file `full_set.txt`) is in the same directory. Recall that the data can be downloaded from https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences. 1. Loading and preprocessing the data Here we follow exactly the same steps as we did earlier. | %matplotlib inline
import string
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
from sklearn.feature_extraction.text import CountVectorizer
## Read in the data set.
with open("sentiment_labelled_sentences/full_set.txt") as f:
content = f.readlines()
## Remove leading and trailing white space
content = [x.strip() for x in content]
## Separate the sentences from the labels
sentences = [x.split("\t")[0] for x in content]
labels = [x.split("\t")[1] for x in content]
## Transform the labels from '0 vs. 1' to '-1 vs. 1'
y = np.array(labels, dtype='int8')
y = 2*y - 1
## full_remove takes a string x and a list of characters removal_list
## returns x with all the characters in removal_list replaced by ' '
def full_remove(x, removal_list):
for w in removal_list:
x = x.replace(w, ' ')
return x
## Remove digits
digits = [str(x) for x in range(10)]
digit_less = [full_remove(x, digits) for x in sentences]
## Remove punctuation
punc_less = [full_remove(x, list(string.punctuation)) for x in digit_less]
## Make everything lower-case
sents_lower = [x.lower() for x in punc_less]
## Define our stop words
stop_set = set(['the', 'a', 'an', 'i', 'he', 'she', 'they', 'to', 'of', 'it', 'from'])
## Remove stop words
sents_split = [x.split() for x in sents_lower]
sents_processed = [" ".join(list(filter(lambda a: a not in stop_set, x))) for x in sents_split]
## Transform to bag of words representation.
vectorizer = CountVectorizer(analyzer = "word", tokenizer = None, preprocessor = None, stop_words = None, max_features = 4500)
data_features = vectorizer.fit_transform(sents_processed)
## Convert the bag-of-words features to a dense array.
data_mat = data_features.toarray()
## Split the data into testing and training sets
np.random.seed(0)
test_inds = np.append(np.random.choice((np.where(y==-1))[0], 250, replace=False), np.random.choice((np.where(y==1))[0], 250, replace=False))
train_inds = list(set(range(len(labels))) - set(test_inds))
train_data = data_mat[train_inds,]
train_labels = y[train_inds]
test_data = data_mat[test_inds,]
test_labels = y[test_inds]
print("train data: ", train_data.shape)
print("test data: ", test_data.shape) | train data: (2500, 4500)
test data: (500, 4500)
| MIT | Assignment 6/sentiment_svm/sentiment-svm.ipynb | ksopan/Edx_Machine_Learning_DSE220x |
2. Fitting a support vector machine to the data. In support vector machines, we are given a set of examples $(x_1, y_1), \ldots, (x_n, y_n)$ and we want to find a weight vector $w \in \mathbb{R}^d$ that solves the following optimization problem:$$ \min_{w \in \mathbb{R}^d} \| w \|^2 + C \sum_{i=1}^n \xi_i $$$$ \text{subject to } y_i \langle w, x_i \rangle \geq 1 - \xi_i \text{ for all } i=1,\ldots, n$$`scikit-learn` provides an SVM solver that we will use. The following routine takes as input the constant `C` (from the above optimization problem) and returns the training and test error of the resulting SVM model. It is invoked as follows:
* `training_error, test_error = fit_classifier(C)`
The default value for parameter `C` is 1.0. | from sklearn import svm
def fit_classifier(C_value=1.0):
clf = svm.LinearSVC(C=C_value, loss='hinge')
clf.fit(train_data,train_labels)
## Get predictions on training data
train_preds = clf.predict(train_data)
train_error = float(np.sum((train_preds > 0.0) != (train_labels > 0.0)))/len(train_labels)
## Get predictions on test data
test_preds = clf.predict(test_data)
test_error = float(np.sum((test_preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
##
return train_error, test_error
cvals = [0.01,0.1,1.0,10.0,100.0,1000.0,10000.0]
for c in cvals:
train_error, test_error = fit_classifier(c)
print ("Error rate for C = %0.2f: train %0.3f test %0.3f" % (c, train_error, test_error)) | Error rate for C = 0.01: train 0.215 test 0.250
Error rate for C = 0.10: train 0.074 test 0.174
Error rate for C = 1.00: train 0.011 test 0.152
Error rate for C = 10.00: train 0.002 test 0.188
Error rate for C = 100.00: train 0.002 test 0.198
Error rate for C = 1000.00: train 0.003 test 0.212
Error rate for C = 10000.00: train 0.001 test 0.208
| MIT | Assignment 6/sentiment_svm/sentiment-svm.ipynb | ksopan/Edx_Machine_Learning_DSE220x |
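To make the objective above concrete, the quantity being minimized can be evaluated for a fitted weight vector; a sketch that reuses the training arrays and ignores the bias term for simplicity:

def svm_objective(w, X, y, C):
    # ||w||^2 plus C times the total hinge loss over the training set
    # (the intercept/bias term is ignored in this sketch)
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return np.dot(w, w) + C * np.sum(hinge)

clf = svm.LinearSVC(C=1.0, loss='hinge')
clf.fit(train_data, train_labels)
print(svm_objective(clf.coef_[0], train_data, train_labels, 1.0))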
3. Evaluating C by k-fold cross-validation. As we can see, the choice of `C` has a very significant effect on the performance of the SVM classifier. We were able to assess this because we have a separate test set. In general, however, this is a luxury we won't possess. How can we choose `C` based only on the training set? A reasonable way to estimate the error associated with a specific value of `C` is by **k-fold cross validation**:
* Partition the training set `S` into `k` equal-sized subsets `S_1, S_2, ..., S_k`.
* For `i=1,2,...,k`, train a classifier with parameter `C` on `S - S_i` (all the training data except `S_i`) and test it on `S_i` to get error estimate `e_i`.
* Average the errors: `(e_1 + ... + e_k)/k`.
The following procedure, **cross_validation_error**, does exactly this. It takes as input:
* the training set `x,y`
* the value of `C` to be evaluated
* the integer `k`
and it returns the estimated error of the classifier for that particular setting of `C`. Look over the code carefully to understand exactly what it is doing. | def cross_validation_error(x,y,C_value,k):
n = len(y)
## Randomly shuffle indices
indices = np.random.permutation(n)
## Initialize error
err = 0.0
## Iterate over partitions
for i in range(k):
## Partition indices
test_indices = indices[int(i*(n/k)):int((i+1)*(n/k) - 1)]
train_indices = np.setdiff1d(indices, test_indices)
## Train classifier with parameter c
clf = svm.LinearSVC(C=C_value, loss='hinge')
clf.fit(x[train_indices], y[train_indices])
## Get predictions on test partition
preds = clf.predict(x[test_indices])
## Compute error
err += float(np.sum((preds > 0.0) != (y[test_indices] > 0.0)))/len(test_indices)
return err/k | _____no_output_____ | MIT | Assignment 6/sentiment_svm/sentiment-svm.ipynb | ksopan/Edx_Machine_Learning_DSE220x |
4. Picking a value of C. The procedure **cross_validation_error** (above) evaluates a single candidate value of `C`. We need to use it repeatedly to identify a good `C`. **For you to do:** Write a function to choose `C`. It will be invoked as follows:
* `c, err = choose_parameter(x,y,k)`
where
* `x,y` is the training data
* `k` is the number of folds of cross-validation
* `c` is the chosen value of the parameter `C`
* `err` is the cross-validation error estimate at `c`
Note: This is a tricky business because a priori, even the order of magnitude of `C` is unknown. Should it be 0.0001 or 10000? You might want to think about trying multiple values that are arranged in a geometric progression (such as powers of ten). *In addition to returning a specific value of `C`, your function should **plot** the cross-validation errors for all the values of `C` it tried out (possibly using a log-scale for the `C`-axis).* | def choose_parameter(x,y,k):
C = [0.0001,0.001,0.01,0.1,1,10,100,1000,10000]
err=[]
for c in C:
err.append(cross_validation_error(x,y,c,k))
err_min,cc=min(list(zip(err,C))) #C value for minimum error
plt.plot(np.log(C),err)
plt.xlabel("Log(C)")
plt.ylabel("Corresponding error")
return cc,err_min | _____no_output_____ | MIT | Assignment 6/sentiment_svm/sentiment-svm.ipynb | ksopan/Edx_Machine_Learning_DSE220x |
Now let's try out your routine! | c, err = choose_parameter(train_data, train_labels, 10)
print("Choice of C: ", c)
print("Cross-validation error estimate: ", err)
## Train it and test it
clf = svm.LinearSVC(C=c, loss='hinge')
clf.fit(train_data, train_labels)
preds = clf.predict(test_data)
error = float(np.sum((preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
print("Test error: ", error) | Choice of C: 1
Cross-validation error estimate: 0.18554216867469878
Test error: 0.152
| MIT | Assignment 6/sentiment_svm/sentiment-svm.ipynb | ksopan/Edx_Machine_Learning_DSE220x |
Theoretical normal distribution

$$P(X) = \frac{1}{\sigma \sqrt{2 \pi}} \exp{\left[-\frac{1}{2}\left(\frac{X-\mu}{\sigma} \right)^2 \right]}$$

* $\mu$: mean of the distribution
* $\sigma$: standard deviation of the distribution | # define our Gaussian distribution
def gaussian(x, mu, sigma):
return 1/(sigma*np.sqrt(2*np.pi))*np.exp(-0.5*pow((x-mu)/sigma,2))
x = np.arange(-4,4,0.1)
y = gaussian(x, 0.0, 1.0)
plt.plot(x, y)
# using scipy
dist = norm(0, 1)
x = np.arange(-4,4,0.1)
y = [dist.pdf(value) for value in x]
plt.plot(x, y)
# computing the cumulative distribution (CDF)
dist = norm(0, 1)
x = np.arange(-4,4,0.1)
y = [dist.cdf(value) for value in x]
plt.plot(x, y) | _____no_output_____ | MIT | probability/probability-course/notebooks/[Clase9]Distribucion_normal.ipynb | Elkinmt19/data-science-dojo |
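The same `scipy.stats` distribution object also exposes the inverse CDF (quantile function) via `ppf` and random sampling via `rvs`; a minimal illustrative sketch:

# quantiles and random samples from the same distribution object
dist = norm(0, 1)
print(dist.ppf(0.975))  # ~1.96, the familiar 97.5% quantile
samples = dist.rvs(size=1000)
plt.hist(samples, bins=30)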
Normal (Gaussian) distribution from data

* *The Excel file* can be downloaded from this page: https://seattlecentral.edu/qelp/sets/057/057.html | df = pd.read_excel('s057.xls')
arr = df['Normally Distributed Housefly Wing Lengths'].values[4:]
values, dist = np.unique(arr, return_counts=True)
print(values)
plt.bar(values, dist)
# estimating the probability distribution parameters from the data
mu = arr.mean()
sigma = arr.std()
# theoretical distribution
dist = norm(mu, sigma)
x = np.arange(30,60,0.1)
y = [dist.pdf(value) for value in x]
plt.plot(x, y)
# data
values, dist = np.unique(arr, return_counts=True)
plt.bar(values, dist/len(arr))
| _____no_output_____ | MIT | probability/probability-course/notebooks/[Clase9]Distribucion_normal.ipynb | Elkinmt19/data-science-dojo |
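`scipy.stats` can also estimate both parameters in a single maximum-likelihood call; a minimal illustrative sketch:

# MLE fit of a normal distribution to the wing-length data
mu_hat, sigma_hat = norm.fit(arr.astype(float))  # astype guards against the object dtype of the Excel slice
print(mu_hat, sigma_hat)  # should closely match arr.mean() and arr.std()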
ANS -1 | from datetime import timedelta  # needed for the day-to-year conversion below
df_1['diff_in_days'] = df_1['Cut Off Date'] - df_1['Borrower DOB (MM/DD/YYYY)']
df_1['diff_in_years'] = df_1["diff_in_days"] / timedelta(days=365)
avg_borrower_age = df_1.groupby('Product Group')['diff_in_years'].mean()
avg_borrower_age
df_1['orig_year'] = df_1['Origination Date'].dt.year
origination_year = df_1.groupby('Product Group').agg({'orig_year':min})
origination_year
total_accounts = df_1.groupby('Product Group').size().reset_index()
total_accounts.rename(columns={0:'Total Accounts'},inplace = True)
total_accounts
df_3 = pd.merge(df_1,df_2,on='LoanID',how='inner')
total_balances = df_3.groupby('Product Group').agg({'Origination Balance':sum,'Outstanding Balance':sum})
total_balances
insured_loans = df_1.groupby('Product Group')['Insurance'].apply(lambda x: (x=='Y').sum()).reset_index(name='Insured Loans')
insured_loans
max_maturity_date = df_1.groupby('Product Group').agg({'Loan MaturityDate':max})
df_4 = pd.merge(max_maturity_date,df_1,on=['Product Group','Loan MaturityDate'],how='inner')
loan_id_maturity = df_4.drop_duplicates(subset = ['Product Group', 'Loan MaturityDate'], keep = 'first').reset_index(drop = True)
loanID_max_maturity = loan_id_maturity[['Product Group','LoanID']]
loanID_max_maturity | _____no_output_____ | MIT | Equipped_AI_Test.ipynb | VAD3R-95/Hackathons_and_Interviews |
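An equivalent, more compact way to pull the loan with the latest maturity per group is `idxmax`; a sketch of the alternative (ties resolve to the first occurrence):

# alternative: locate each group's max-maturity row position directly
loanID_max_maturity_alt = df_1.loc[
    df_1.groupby('Product Group')['Loan MaturityDate'].idxmax(),
    ['Product Group', 'LoanID']
]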
ANS -2 | from functools import reduce  # needed for the chained merge below
df_test = [origination_year, insured_loans, loanID_max_maturity, total_balances, total_accounts]
df_ans_2 = reduce(lambda left, right: pd.merge(left, right, on=['Product Group'], how='inner'), df_test)
df_ans_2 | _____no_output_____ | MIT | Equipped_AI_Test.ipynb | VAD3R-95/Hackathons_and_Interviews |
ANS -3 | max_originating_balance = df_1.groupby('Product Group').agg({'Origination Balance':max})
df_merged = pd.merge(max_originating_balance,df_1,on=['Product Group','Origination Balance'],how='inner')
loan_id_originating_balance = df_merged.drop_duplicates(subset = ['Product Group', 'Origination Balance'], keep = 'first').reset_index(drop = True)
loanID_max_originating_balance = loan_id_originating_balance[['Product Group','LoanID']]
loanID_max_originating_balance | _____no_output_____ | MIT | Equipped_AI_Test.ipynb | VAD3R-95/Hackathons_and_Interviews |
ANS -4 | df_ques3 = pd.merge(df_1,df_2,on='LoanID',how='inner')
df_ans_3 = df_ques3.groupby(['Product Group']).apply(lambda x: x['Outstanding Balance'].sum()/x['Origination Balance'].sum()).reset_index(name='Balance Amortized')
df_ans_3 | _____no_output_____ | MIT | Equipped_AI_Test.ipynb | VAD3R-95/Hackathons_and_Interviews |
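The same ratio can also be computed without `apply`, by summing both balance columns per group and dividing; a minimal equivalent sketch:

sums = df_ques3.groupby('Product Group')[['Origination Balance', 'Outstanding Balance']].sum()
df_ans_3_alt = (sums['Outstanding Balance'] / sums['Origination Balance']).reset_index(name='Balance Amortized')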
Transfer Learning Template | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform | _____no_output_____ | MIT | experiments/tl_1v2/cores-oracle.run1.framed/trials/14/trial.ipynb | stevester94/csc500-notebooks |
Allowed Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present).

Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean. | required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:cores-oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "CORES_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if 'parameters' not in locals() and 'parameters' not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
    p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
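# Note: eval() on a config string executes arbitrary code; a safer alternative pattern
# (a sketch, assuming only the dtypes this template actually uses) is a lookup table:
#   _DTYPE_MAP = {"torch.float32": torch.float32, "torch.float64": torch.float64}
#   torch.set_default_dtype(_DTYPE_MAP[p.torch_default_dtype])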
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
    if x_transforms == []: x_transform = None
    else: x_transform = get_chained_transform(x_transforms)
    if episode_transforms != []: raise Exception("episode_transforms not implemented")
    # Tag every episode with its (prefixed) domain so source and target domains stay distinguishable
    episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment) | _____no_output_____ | MIT | experiments/tl_1v2/cores-oracle.run1.framed/trials/14/trial.ipynb | stevester94/csc500-notebooks |
Logistic Regression on 'HEART DISEASE' Dataset

Elif Cansu YILDIZ | from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql.functions import col, countDistinct
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, MinMaxScaler, IndexToString
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
spark = SparkSession\
.builder\
.appName("MachineLearningExample")\
.getOrCreate() | _____no_output_____ | MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
The dataset used is the 'Heart Disease' dataset from Kaggle. You can get it from this [link](https://www.kaggle.com/ronitf/heart-disease-uci). | df = spark.read.csv('datasets/heart.csv', header = True, inferSchema = True) #Kaggle Dataset
df.printSchema()
df.show(5) | root
|-- age: integer (nullable = true)
|-- sex: integer (nullable = true)
|-- cp: integer (nullable = true)
|-- trestbps: integer (nullable = true)
|-- chol: integer (nullable = true)
|-- fbs: integer (nullable = true)
|-- restecg: integer (nullable = true)
|-- thalach: integer (nullable = true)
|-- exang: integer (nullable = true)
|-- oldpeak: double (nullable = true)
|-- slope: integer (nullable = true)
|-- ca: integer (nullable = true)
|-- thal: integer (nullable = true)
|-- target: integer (nullable = true)
+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+
|age|sex| cp|trestbps|chol|fbs|restecg|thalach|exang|oldpeak|slope| ca|thal|target|
+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+
| 63| 1| 3| 145| 233| 1| 0| 150| 0| 2.3| 0| 0| 1| 1|
| 37| 1| 2| 130| 250| 0| 1| 187| 0| 3.5| 0| 0| 2| 1|
| 41| 0| 1| 130| 204| 0| 0| 172| 0| 1.4| 2| 0| 2| 1|
| 56| 1| 1| 120| 236| 0| 1| 178| 0| 0.8| 2| 0| 2| 1|
| 57| 0| 0| 120| 354| 0| 1| 163| 1| 0.6| 2| 0| 2| 1|
+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+
only showing top 5 rows
| MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
__HOW MANY DISTINCT VALUES DO THE COLUMNS HAVE?__ | df.agg(*(countDistinct(col(c)).alias(c) for c in df.columns)).show()
|age|sex| cp|trestbps|chol|fbs|restecg|thalach|exang|oldpeak|slope| ca|thal|target|
+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+
| 41| 2| 4| 49| 152| 2| 3| 91| 2| 40| 3| 5| 4| 2|
+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+
| MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
__SET the Label Column and Input Columns__ | labelColumn = "thal"
input_columns = [t[0] for t in df.dtypes if t[0]!=labelColumn]
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = df.randomSplit([0.7, 0.3])
print("total data count: ", df.count())
print("train data count: ", trainingData.count())
print("test data count: ", testData.count()) | total data count: 303
train data count: 218
test data count: 85
| MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
__TRAINING__ | assembler = VectorAssembler(inputCols = input_columns, outputCol='features')
lr = LogisticRegression(featuresCol='features', labelCol=labelColumn,
maxIter=10, regParam=0.3, elasticNetParam=0.8)
stages = [assembler, lr]
partialPipeline = Pipeline().setStages(stages)
model = partialPipeline.fit(trainingData) | _____no_output_____ | MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
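Once fitted, the last pipeline stage is the trained logistic regression model, and its learned weights can be inspected; a minimal illustrative sketch (Spark fits a multinomial model here because the label has more than two classes):

lrModel = model.stages[-1]
print(lrModel.coefficientMatrix)  # one row of coefficients per class
print(lrModel.interceptVector)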
__MAKE PREDICTIONS__ | predictions = model.transform(testData)
predictionss = predictions.select("probability", "rawPrediction", "prediction",
col(labelColumn).alias("label"))
predictionss[["probability", "prediction", "label"]].show(5, truncate=False) | +--------------------------------------------------------------------------------+----------+-----+
|probability |prediction|label|
+--------------------------------------------------------------------------------+----------+-----+
|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |
|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |3 |
|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |
|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |
|[0.012875234771605678,0.06656572644096996,0.5051698495258184,0.4153891892616059]|2.0 |3 |
+--------------------------------------------------------------------------------+----------+-----+
only showing top 5 rows
| MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
__EVALUATION for Binary Classification__ | evaluator = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="prediction", metricName="areaUnderROC")
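# Caveat: BinaryClassificationEvaluator assumes a binary {0, 1} label. The label used
# here ("thal") has four distinct classes, so areaUnderROC/areaUnderPR are not
# meaningful for this setup; the multiclass metrics in the next cell are the ones to rely on.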
areaUnderROC = evaluator.evaluate(predictionss)
print("Area under ROC = %g" % areaUnderROC)
evaluator = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="prediction", metricName="areaUnderPR")
areaUnderPR = evaluator.evaluate(predictionss)
print("areaUnderPR = %g" % areaUnderPR) | _____no_output_____ | MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
__EVALUATION for Multiclass Classification__ | evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictionss)
print("accuracy = %g" % accuracy)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="f1")
f1 = evaluator.evaluate(predictionss)
print("f1 = %g" % f1)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedPrecision")
weightedPrecision = evaluator.evaluate(predictionss)
print("weightedPrecision = %g" % weightedPrecision)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedRecall")
weightedRecall = evaluator.evaluate(predictionss)
print("weightedRecall = %g" % weightedRecall) | accuracy = 0.564706
f1 = 0.407607
weightedPrecision = 0.318893
weightedRecall = 0.564706
| MIT | Spark/HeartDataset-MLlib.ipynb | elifcansuyildiz/MachineLearningNotebooks |
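For a per-class view beyond these aggregate scores, the RDD-based `MulticlassMetrics` can produce a confusion matrix; a minimal illustrative sketch:

from pyspark.mllib.evaluation import MulticlassMetrics

pred_and_label = predictionss.select("prediction", "label").rdd.map(lambda r: (float(r[0]), float(r[1])))
metrics = MulticlassMetrics(pred_and_label)
print(metrics.confusionMatrix().toArray())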
A Complete Machine Learning Project | import os
import tarfile
import urllib
import pandas as pd
import numpy as np
from CategoricalEncoder import CategoricalEncoder | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Download the dataset | DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "../datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"  # the remote path must not reuse the local "../" prefix
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    if os.path.isfile(os.path.join(housing_path, "housing.tgz")):
        print("already downloaded")
        return
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data() | already downloaded
| MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Load the dataset | def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing_data = load_housing_data()
housing_data.head()
housing_data.info()
housing_data["ocean_proximity"].value_counts()
housing_data.describe() | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Plotting | %matplotlib inline
import matplotlib.pyplot as plt
housing_data.hist(bins=50, figsize=(20, 15)) | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Create a test set | from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing_data, test_size=0.2, random_state=42)
housing = train_set.copy()
housing.plot(kind="scatter" , x="longitude", y="latitude", alpha= 0.3, s=housing[ "population" ]/100, label= "population", c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True) | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Pearson correlation coefficient

Because the dataset is not very large, you can easily compute the standard correlation coefficient (also called the Pearson correlation coefficient) between every pair of attributes using the `corr()` method. The correlation coefficient ranges from -1 to 1. Values close to 1 indicate a strong positive correlation; for example, the median house value tends to rise when the median income rises. Values close to -1 indicate a strong negative correlation; you can see that latitude and the median house value are slightly negatively correlated (i.e., prices tend to drop the farther north you go). Finally, coefficients close to 0 mean there is no linear correlation.

> The correlation coefficient can completely miss nonlinear relationships | corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False) | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
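For reference, the Pearson coefficient between two attributes $x$ and $y$ is

$$r_{xy} = \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i}(x_i-\bar{x})^2}\sqrt{\sum_{i}(y_i-\bar{y})^2}}$$

which is exactly what `corr()` computes for every pair of columns.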
Create some new features | housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False) | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
Prepare the data for machine learning

All data processing __must be done only on the training set__; the test set must not be used. | housing = train_set.drop("median_house_value", axis=1)
housing_labels = train_set["median_house_value"].copy() | _____no_output_____ | MIT | sklearn-guide/chapter03/ml-3.ipynb | a630140621/machine-learning-course |
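A typical next step is imputing the missing `total_bedrooms` values, fitting the imputer on the training set only. A minimal sketch, assuming a scikit-learn version that provides `sklearn.impute`:

from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis=1)  # the imputer only handles numeric columns
X = imputer.fit_transform(housing_num)  # fit on training data; reuse transform() on the test set later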