import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

def readucr(filename):
    data = np.loadtxt(filename, delimiter="\t")
    y = data[:, 0]
    x = data[:, 1:]
    return x, y.astype(int)
root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"

x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")
Visualize the data
Here we visualize one timeseries example for each class in the dataset.
classes = np.unique(np.concatenate((y_train, y_test), axis=0))

plt.figure()
for c in classes:
    c_x_train = x_train[y_train == c]
    plt.plot(c_x_train[0], label="class " + str(c))
plt.legend(loc="best")
plt.show()
plt.close()
(Plot: one example timeseries per class.)
Standardize the data
Our timeseries already have a single, fixed length (500). However, their values typically span different ranges, which is not ideal for a neural network; in general we want the input values normalized. For this specific dataset, the data is already z-normalized: each timeseries sample has a mean equal to zero and a standard deviation equal to one. This type of normalization is very common for timeseries classification problems; see Bagnall et al. (2016).
Note that the timeseries data used here are univariate, meaning we only have one channel per timeseries example. We will therefore transform each timeseries into a multivariate one with a single channel, using a simple reshape via numpy. This will allow us to construct a model that is easily applicable to multivariate time series.
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
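As a quick sanity check, the following sketch (on synthetic data with the same layout, not the FordA arrays themselves) shows what per-sample z-normalization means and what the reshape produces:

```python
import numpy as np

# Synthetic stand-in with the same layout as FordA: samples x timesteps.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(10, 500))

# z-normalize each sample (the FordA data ships already normalized like this)
x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# add the trailing channel dimension, as done for x_train/x_test above
x = x.reshape((x.shape[0], x.shape[1], 1))

print(x.shape)  # -> (10, 500, 1)
assert np.allclose(x.mean(axis=1), 0.0, atol=1e-6)
assert np.allclose(x.std(axis=1), 1.0, atol=1e-6)
```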
Finally, in order to use sparse_categorical_crossentropy, we will have to count the number of classes beforehand.
num_classes = len(np.unique(y_train))
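The advantage of sparse_categorical_crossentropy is that it consumes integer labels directly, with no one-hot encoding needed. A minimal numpy sketch of the quantity it computes (illustrative only, not Keras' actual implementation):

```python
import numpy as np

def sparse_cce(y_true, y_pred):
    """Mean negative log-probability of the integer-labelled true class.

    y_true: integer labels, shape (batch,)
    y_pred: softmax probabilities, shape (batch, num_classes)
    """
    rows = np.arange(len(y_true))
    return -np.mean(np.log(y_pred[rows, y_true]))

probs = np.array([[0.9, 0.1], [0.2, 0.8]])  # hypothetical model outputs
labels = np.array([0, 1])                   # integer labels, no one-hot
print(round(sparse_cce(labels, probs), 4))  # -> 0.1643
```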
Now we shuffle the training set because we will be using the validation_split option later when training.
idx = np.random.permutation(len(x_train))
x_train = x_train[idx]
y_train = y_train[idx]
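The shuffle matters because validation_split carves the validation set off the end of the arrays before training. A small sketch (on synthetic labels, assuming a hypothetical class-sorted ordering) shows why an unshuffled, class-sorted dataset would give a one-class validation set:

```python
import numpy as np

# validation_split takes the *last* fraction of the data.
# If the data is ordered by class, validation sees only one class.
n = 100
y = np.array([0] * 50 + [1] * 50)  # labels sorted by class

split = int(n * (1 - 0.2))  # validation_split=0.2 -> last 20 samples
y_val = y[split:]
print(np.unique(y_val))  # -> [1]  (only one class!)

# Shuffling first (as done above) mixes the classes into the tail:
rng = np.random.default_rng(42)
y_val_shuffled = y[rng.permutation(n)][split:]
# y_val_shuffled now (almost surely) contains both classes
```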
Standardize the labels to positive integers. The expected labels will then be 0 and 1.
y_train[y_train == -1] = 0
y_test[y_test == -1] = 0
Build a model
We build a Fully Convolutional Neural Network originally proposed in this paper. The implementation is based on the TF 2 version provided here. The following hyperparameters (kernel_size, filters, the usage of BatchNorm) were found via random search using KerasTuner.
def make_model(input_shape):
    input_layer = keras.layers.Input(input_shape)

    conv1 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(input_layer)
    conv1 = keras.layers.BatchNormalization()(conv1)
    conv1 = keras.layers.ReLU()(conv1)

    conv2 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv1)
    conv2 = keras.layers.BatchNormalization()(conv2)
    conv2 = keras.layers.ReLU()(conv2)

    conv3 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv2)
    conv3 = keras.layers.BatchNormalization()(conv3)
    conv3 = keras.layers.ReLU()(conv3)

    gap = keras.layers.GlobalAveragePooling1D()(conv3)

    output_layer = keras.layers.Dense(num_classes, activation="softmax")(gap)

    return keras.models.Model(inputs=input_layer, outputs=output_layer)
model = make_model(input_shape=x_train.shape[1:])
keras.utils.plot_model(model, show_shapes=True)
Train the model
epochs = 500
batch_size = 32

callbacks = [
    keras.callbacks.ModelCheckpoint(
        "best_model.h5", save_best_only=True, monitor="val_loss"
    ),
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=20, min_lr=0.0001
    ),
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs,
    callbacks=callbacks,
    validation_split=0.2,
    verbose=1,
)
Epoch 1/500
90/90 [==============================] - 1s 8ms/step - loss: 0.5531 - sparse_categorical_accuracy: 0.7017 - val_loss: 0.7335 - val_sparse_categorical_accuracy: 0.4882
Epoch 2/500
90/90 [==============================] - 1s 6ms/step - loss: 0.4520 - sparse_categorical_accuracy: 0.7729 - val_loss: 0.7446 - val_sparse_categorical_accuracy: 0.4882
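For reference, the ReduceLROnPlateau callback configured above halves the learning rate (factor=0.5) each time val_loss fails to improve for 20 epochs, flooring at min_lr=0.0001. A pure-Python sketch of the resulting decay (illustrative only, not Keras internals):

```python
def plateau_step(lr, factor=0.5, min_lr=0.0001):
    """One learning-rate reduction, as applied when val_loss plateaus."""
    return max(lr * factor, min_lr)

lr = 0.001  # Adam's default learning rate
lrs = []
for _ in range(5):  # five consecutive plateaus
    lr = plateau_step(lr)
    lrs.append(lr)
print(lrs)  # -> [0.0005, 0.00025, 0.000125, 0.0001, 0.0001]
```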