Total params: 9,505
Trainable params: 9,505
Non-trainable params: 0
_________________________________________________________________
Train the model

Please note that we are using x_train as both the input and the target, since this is a reconstruction model.
history = model.fit(
    x_train,
    x_train,
    epochs=50,
    batch_size=128,
    validation_split=0.1,
    callbacks=[
        keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, mode="min")
    ],
)
Epoch 1/50
27/27 [==============================] - 2s 35ms/step - loss: 0.5868 - val_loss: 0.1225
Epoch 2/50
27/27 [==============================] - 1s 29ms/step - loss: 0.0882 - val_loss: 0.0404
Epoch 3/50
27/27 [==============================] - 1s 30ms/step - loss: 0.0594 - val_loss: 0.0359
Epoch 4/50
27/27 [==============================] - 1s 29ms/step - loss: 0.0486 - val_loss: 0.0287
Epoch 5/50
27/27 [==============================] - 1s 30ms/step - loss: 0.0398 - val_loss: 0.0231
Epoch 6/50
27/27 [==============================] - 1s 31ms/step - loss: 0.0337 - val_loss: 0.0208
Epoch 7/50
27/27 [==============================] - 1s 31ms/step - loss: 0.0299 - val_loss: 0.0182
Epoch 8/50
27/27 [==============================] - 1s 31ms/step - loss: 0.0271 - val_loss: 0.0187
Epoch 9/50
27/27 [==============================] - 1s 32ms/step - loss: 0.0251 - val_loss: 0.0190
Epoch 10/50
27/27 [==============================] - 1s 31ms/step - loss: 0.0235 - val_loss: 0.0179
Epoch 11/50
27/27 [==============================] - 1s 32ms/step - loss: 0.0224 - val_loss: 0.0189
Epoch 12/50
27/27 [==============================] - 1s 33ms/step - loss: 0.0214 - val_loss: 0.0199
Epoch 13/50
27/27 [==============================] - 1s 33ms/step - loss: 0.0206 - val_loss: 0.0194
Epoch 14/50
27/27 [==============================] - 1s 32ms/step - loss: 0.0199 - val_loss: 0.0208
Epoch 15/50
27/27 [==============================] - 1s 35ms/step - loss: 0.0192 - val_loss: 0.0204
Let's plot the training and validation loss to see how the training went.

plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
plt.show()
Detecting anomalies

We will detect anomalies by determining how well our model can reconstruct the input data.

1. Find the MAE loss on the training samples.
2. Find the max MAE loss value. This is the worst our model has performed trying to reconstruct a sample. We will make this the threshold for anomaly detection.
3. If the reconstruction loss for a sample is greater than this threshold value, we can infer that the model is seeing a pattern it isn't familiar with. We will label this sample as an anomaly.
# Get train MAE loss.
x_train_pred = model.predict(x_train)
train_mae_loss = np.mean(np.abs(x_train_pred - x_train), axis=1)

plt.hist(train_mae_loss, bins=50)
plt.xlabel("Train MAE loss")
plt.ylabel("No of samples")
plt.show()

# Get reconstruction loss threshold.
threshold = np.max(train_mae_loss)
print("Reconstruction error threshold: ", threshold)
Reconstruction error threshold:  0.1195600905852785
Compare reconstruction

Just for fun, let's see how our model has reconstructed the first sample. This is the 288 timesteps from day 1 of our training dataset.
# Checking how the first sequence is learnt
plt.plot(x_train[0])
plt.plot(x_train_pred[0])
plt.show()
Prepare test data

df_test_value = (df_daily_jumpsup - training_mean) / training_std
fig, ax = plt.subplots()
df_test_value.plot(legend=False, ax=ax)
plt.show()

# Create sequences from test values.
x_test = create_sequences(df_test_value.values)
print("Test input shape: ", x_test.shape)
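The create_sequences helper is defined earlier in the tutorial; for reference, a minimal sketch of such a sliding-window function, assuming a window length of 288 timesteps (one day of readings, per the text above) and a single-feature input of shape (n, 1):

```python
import numpy as np

TIME_STEPS = 288  # one day of readings, as used elsewhere in the tutorial


def create_sequences(values, time_steps=TIME_STEPS):
    """Slide a window of length `time_steps` over `values` (shape (n, 1))
    and stack the windows into an array of shape (n - time_steps + 1, time_steps, 1)."""
    output = []
    for i in range(len(values) - time_steps + 1):
        output.append(values[i : (i + time_steps)])
    return np.stack(output)


# e.g. 1000 readings yield 1000 - 288 + 1 = 713 overlapping windows
x = create_sequences(np.zeros((1000, 1)))
print(x.shape)  # (713, 288, 1)
```

The exact signature may differ slightly in the full tutorial; the key point is that each window overlaps the previous one by all but one timestep.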
# Get test MAE loss.
x_test_pred = model.predict(x_test)
test_mae_loss = np.mean(np.abs(x_test_pred - x_test), axis=1)
test_mae_loss = test_mae_loss.reshape((-1))
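With the per-sample test losses in hand, the labeling step described above (a sample is an anomaly when its reconstruction loss exceeds the threshold) reduces to a single vectorized comparison. A minimal sketch, using made-up loss values and threshold in place of the real test_mae_loss and threshold:

```python
import numpy as np

# Hypothetical stand-ins for the threshold and test_mae_loss computed above.
threshold = 0.12
test_mae_loss = np.array([0.03, 0.15, 0.08, 0.30, 0.11])

# A sample is flagged as an anomaly when its reconstruction error
# exceeds the max training reconstruction error (the threshold).
anomalies = test_mae_loss > threshold
print("Number of anomaly samples:", np.sum(anomalies))        # 2
print("Indices of anomaly samples:", np.where(anomalies)[0])  # [1 3]
```

The boolean mask can then be mapped back to timestamps in the test dataframe to locate where the anomalous behavior occurs.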