value
timestamp
2014-04-01 00:05:00 21.970327
2014-04-01 00:10:00 18.624806
2014-04-01 00:15:00 21.953684
2014-04-01 00:20:00 21.909120

value
timestamp
2014-04-01 00:00:00 19.761252
2014-04-01 00:05:00 20.500833
2014-04-01 00:10:00 19.961641
2014-04-01 00:15:00 21.490266
2014-04-01 00:20:00 20.187739
Visualize the data
Timeseries data without anomalies
We will use the following data for training.
fig, ax = plt.subplots()
df_small_noise.plot(legend=False, ax=ax)
plt.show()
Timeseries data with anomalies
We will use the following data for testing and see if the sudden jump up in the data is detected as an anomaly.
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
plt.show()
Prepare training data
Get the values from the training timeseries data file and normalize them. We have one value every 5 minutes for 14 days:
24 * 60 / 5 = 288 timesteps per day
288 * 14 = 4032 data points in total
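As a quick sanity check, the counts above follow directly from the sampling interval:

```python
# Sampling interval is 5 minutes; the file covers 14 days.
minutes_per_day = 24 * 60
steps_per_day = minutes_per_day // 5   # 288 timesteps per day
total_points = steps_per_day * 14      # 4032 data points in total
print(steps_per_day, total_points)     # 288 4032
```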
# Normalize and save the mean and std we get,
# for normalizing test data.
training_mean = df_small_noise.mean()
training_std = df_small_noise.std()
df_training_value = (df_small_noise - training_mean) / training_std
print("Number of training samples:", len(df_training_value))
Number of training samples: 4032
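The saved mean and std are meant to be reused on the test series so that both live on the same scale. A minimal sketch with toy stand-in dataframes (the values here are hypothetical, not from the dataset):

```python
import numpy as np
import pandas as pd

# Toy stand-ins for df_small_noise (training) and df_daily_jumpsup (test).
df_small_noise = pd.DataFrame({"value": np.arange(10, dtype=float)})
df_daily_jumpsup = pd.DataFrame({"value": np.arange(10, dtype=float) + 100.0})

# Normalize training data and keep its statistics.
training_mean = df_small_noise.mean()
training_std = df_small_noise.std()
df_training_value = (df_small_noise - training_mean) / training_std

# Apply the *training* statistics to the test data, never its own.
df_test_value = (df_daily_jumpsup - training_mean) / training_std
```

Normalizing the test data with its own statistics would hide exactly the level shifts we want the model to flag as anomalies.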
Create sequences
Create sequences combining TIME_STEPS contiguous data values from the training data.
TIME_STEPS = 288
# Generate training sequences for use in the model.
def create_sequences(values, time_steps=TIME_STEPS):
    output = []
    for i in range(len(values) - time_steps + 1):
        output.append(values[i : (i + time_steps)])
    return np.stack(output)
x_train = create_sequences(df_training_value.values)
print("Training input shape: ", x_train.shape)
Training input shape: (3745, 288, 1)
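The sequence count follows from the sliding window: 4032 points with a window of 288 give 4032 - 288 + 1 = 3745 overlapping sequences. The same logic on toy data:

```python
import numpy as np

TIME_STEPS = 3  # small window for illustration; the tutorial uses 288

def create_sequences(values, time_steps=TIME_STEPS):
    output = []
    for i in range(len(values) - time_steps + 1):
        output.append(values[i : (i + time_steps)])
    return np.stack(output)

# 5 points with window 3 -> 5 - 3 + 1 = 3 overlapping sequences.
toy = np.arange(5, dtype=float).reshape(-1, 1)
seqs = create_sequences(toy)
print(seqs.shape)  # (3, 3, 1)
```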
Build a model
We will build a convolutional reconstruction autoencoder model. The model will take input of shape (batch_size, sequence_length, num_features) and return output of the same shape. In this case, sequence_length is 288 and num_features is 1.
model = keras.Sequential(
    [
        layers.Input(shape=(x_train.shape[1], x_train.shape[2])),
        layers.Conv1D(
            filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
        ),
        layers.Dropout(rate=0.2),
        layers.Conv1D(
            filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
        ),
        layers.Conv1DTranspose(
            filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
        ),
        layers.Dropout(rate=0.2),
        layers.Conv1DTranspose(
            filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
        ),
        layers.Conv1DTranspose(filters=1, kernel_size=7, padding="same"),
    ]
)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.summary()
WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.
Model: \"sequential\"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 144, 32) 256
_________________________________________________________________
dropout (Dropout) (None, 144, 32) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 72, 16) 3600
_________________________________________________________________
conv1d_transpose (Conv1DTran (None, 144, 16) 1808
_________________________________________________________________
dropout_1 (Dropout) (None, 144, 16) 0
_________________________________________________________________
conv1d_transpose_1 (Conv1DTr (None, 288, 32) 3616
_________________________________________________________________
conv1d_transpose_2 (Conv1DTr (None, 288, 1) 225
=================================================================
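The parameter counts in the summary can be verified by hand. For both `Conv1D` and `Conv1DTranspose`, the count is `kernel_size * in_channels * filters` weights plus one bias per filter:

```python
def conv1d_params(kernel_size, in_channels, filters):
    # Weight tensor plus one bias per output filter.
    return kernel_size * in_channels * filters + filters

print(conv1d_params(7, 1, 32))   # 256  (conv1d)
print(conv1d_params(7, 32, 16))  # 3600 (conv1d_1)
print(conv1d_params(7, 16, 16))  # 1808 (conv1d_transpose)
print(conv1d_params(7, 16, 32))  # 3616 (conv1d_transpose_1)
print(conv1d_params(7, 32, 1))   # 225  (conv1d_transpose_2)
```

The strided `Conv1D` layers halve the sequence length twice (288 → 144 → 72), and the strided `Conv1DTranspose` layers restore it (72 → 144 → 288), so the output shape matches the input shape as required for a reconstruction autoencoder.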