plt.hist(test_mae_loss, bins=50)
plt.xlabel("test MAE loss")
plt.ylabel("No of samples")
plt.show()
# Detect all the samples which are anomalies.
anomalies = test_mae_loss > threshold
print("Number of anomaly samples: ", np.sum(anomalies))
print("Indices of anomaly samples: ", np.where(anomalies))
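The `threshold` used here comes from the training reconstruction errors computed earlier in the example: a sample is flagged when the autoencoder reconstructs it worse than its worst training sample. A minimal, self-contained sketch of that rule, using randomly generated losses in place of the real `train_mae_loss` / `test_mae_loss` arrays:

```python
import numpy as np

# Hypothetical reconstruction losses standing in for the real MAE losses
# computed from the autoencoder earlier in the example.
rng = np.random.default_rng(0)
train_mae_loss = rng.uniform(0.0, 0.1, size=1000)
test_mae_loss = rng.uniform(0.0, 0.2, size=500)

# Threshold = worst reconstruction error seen on training data; any test
# sample reconstructed worse than that is flagged as an anomaly.
threshold = np.max(train_mae_loss)
anomalies = test_mae_loss > threshold
print("Threshold:", threshold)
print("Number of anomaly samples:", np.sum(anomalies))
```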
Test input shape: (3745, 288, 1)
png
Number of anomaly samples: 399
Indices of anomaly samples: (array([ 789, 1653, 1654, 1941, 2697, 2702, 2703, 2704, 2705, 2706, 2707,
2708, 2709, 2710, 2711, 2712, 2713, 2714, 2715, 2716, 2717, 2718,
2719, 2720, 2721, 2722, 2723, 2724, 2725, 2726, 2727, 2728, 2729,
2730, 2731, 2732, 2733, 2734, 2735, 2736, 2737, 2738, 2739, 2740,
2741, 2742, 2743, 2744, 2745, 2746, 2747, 2748, 2749, 2750, 2751,
2752, 2753, 2754, 2755, 2756, 2757, 2758, 2759, 2760, 2761, 2762,
2763, 2764, 2765, 2766, 2767, 2768, 2769, 2770, 2771, 2772, 2773,
2774, 2775, 2776, 2777, 2778, 2779, 2780, 2781, 2782, 2783, 2784,
2785, 2786, 2787, 2788, 2789, 2790, 2791, 2792, 2793, 2794, 2795,
2796, 2797, 2798, 2799, 2800, 2801, 2802, 2803, 2804, 2805, 2806,
2807, 2808, 2809, 2810, 2811, 2812, 2813, 2814, 2815, 2816, 2817,
2818, 2819, 2820, 2821, 2822, 2823, 2824, 2825, 2826, 2827, 2828,
2829, 2830, 2831, 2832, 2833, 2834, 2835, 2836, 2837, 2838, 2839,
2840, 2841, 2842, 2843, 2844, 2845, 2846, 2847, 2848, 2849, 2850,
2851, 2852, 2853, 2854, 2855, 2856, 2857, 2858, 2859, 2860, 2861,
2862, 2863, 2864, 2865, 2866, 2867, 2868, 2869, 2870, 2871, 2872,
2873, 2874, 2875, 2876, 2877, 2878, 2879, 2880, 2881, 2882, 2883,
2884, 2885, 2886, 2887, 2888, 2889, 2890, 2891, 2892, 2893, 2894,
2895, 2896, 2897, 2898, 2899, 2900, 2901, 2902, 2903, 2904, 2905,
2906, 2907, 2908, 2909, 2910, 2911, 2912, 2913, 2914, 2915, 2916,
2917, 2918, 2919, 2920, 2921, 2922, 2923, 2924, 2925, 2926, 2927,
2928, 2929, 2930, 2931, 2932, 2933, 2934, 2935, 2936, 2937, 2938,
2939, 2940, 2941, 2942, 2943, 2944, 2945, 2946, 2947, 2948, 2949,
2950, 2951, 2952, 2953, 2954, 2955, 2956, 2957, 2958, 2959, 2960,
2961, 2962, 2963, 2964, 2965, 2966, 2967, 2968, 2969, 2970, 2971,
2972, 2973, 2974, 2975, 2976, 2977, 2978, 2979, 2980, 2981, 2982,
2983, 2984, 2985, 2986, 2987, 2988, 2989, 2990, 2991, 2992, 2993,
2994, 2995, 2996, 2997, 2998, 2999, 3000, 3001, 3002, 3003, 3004,
3005, 3006, 3007, 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015,
3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023, 3024, 3025, 3026,
3027, 3028, 3029, 3030, 3031, 3032, 3033, 3034, 3035, 3036, 3037,
3038, 3039, 3040, 3041, 3042, 3043, 3044, 3045, 3046, 3047, 3048,
3049, 3050, 3051, 3052, 3053, 3054, 3055, 3056, 3057, 3058, 3059,
3060, 3061, 3062, 3063, 3064, 3065, 3066, 3067, 3068, 3069, 3070,
3071, 3072, 3073, 3074, 3075, 3076, 3077, 3078, 3079, 3080, 3081,
3082, 3083, 3084, 3085, 3086, 3087, 3088, 3089, 3090, 3091, 3092,
3093, 3094, 3095]),)
Plot anomalies
We now know which samples of the data are anomalies. With this, we will find the corresponding timestamps from the original test data. We will use the following method to do that:
Let's say time_steps = 3 and we have 10 training values. Our x_train will look like this:
0, 1, 2
1, 2, 3
2, 3, 4
3, 4, 5
4, 5, 6
5, 6, 7
6, 7, 8
7, 8, 9
All except the initial and the final time_steps - 1 data values will appear in time_steps number of samples. So, if we know that the samples [(3, 4, 5), (4, 5, 6), (5, 6, 7)] are anomalies, we can say that the data point 5 is an anomaly.
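The windowing illustrated above can be sketched with a minimal stand-in for the `create_sequences` helper used earlier in the example (the helper name and signature here are assumptions for illustration):

```python
import numpy as np

def create_sequences(values, time_steps=3):
    # Slide a window of length `time_steps` over the data, one step at a time.
    return np.stack(
        [values[i : i + time_steps] for i in range(len(values) - time_steps + 1)]
    )

x_train = create_sequences(np.arange(10), time_steps=3)
print(x_train)
# Each row is one sample: [0 1 2], [1 2 3], ..., [7 8 9]
```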
# data i is an anomaly if samples [(i - timesteps + 1) to (i)] are anomalies
anomalous_data_indices = []
for data_idx in range(TIME_STEPS - 1, len(df_test_value) - TIME_STEPS + 1):
    if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx + 1]):
        anomalous_data_indices.append(data_idx)
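This logic can be sanity-checked against the toy example from the text: with 10 data points, TIME_STEPS = 3, and only the samples (3, 4, 5), (4, 5, 6), (5, 6, 7) flagged, the only anomalous data point should be 5.

```python
import numpy as np

TIME_STEPS = 3
n_data = 10  # 10 data points -> 8 overlapping samples of length 3

# Flag the samples starting at indices 3, 4 and 5,
# i.e. (3, 4, 5), (4, 5, 6) and (5, 6, 7).
anomalies = np.zeros(n_data - TIME_STEPS + 1, dtype=bool)
anomalies[[3, 4, 5]] = True

# data i is an anomaly if all samples covering i are anomalies
anomalous_data_indices = []
for data_idx in range(TIME_STEPS - 1, n_data - TIME_STEPS + 1):
    if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx + 1]):
        anomalous_data_indices.append(data_idx)

print(anomalous_data_indices)  # [5]
```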
Let's overlay the anomalies on the original test data plot.
df_subset = df_daily_jumpsup.iloc[anomalous_data_indices]
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
df_subset.plot(legend=False, ax=ax, color="r")
plt.show()
png
Training a timeseries classifier from scratch on the FordA dataset from the UCR/UEA archive.
Introduction
This example shows how to do timeseries classification from scratch, starting from raw CSV timeseries files on disk. We demonstrate the workflow on the FordA dataset from the UCR/UEA archive.
Setup
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
Load the data: the FordA dataset
Dataset description
The dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task. The full description of this dataset can be found here.
Read the TSV data
We will use the FordA_TRAIN file for training and the FordA_TEST file for testing. The simplicity of this dataset allows us to demonstrate effectively how to use ConvNets for timeseries classification. In this file, the first column corresponds to the label.
def readucr(filename):
    data = np.loadtxt(filename, delimiter="\t")
    y = data[:, 0]
    x = data[:, 1:]
    return x, y
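The label/series split performed by `readucr` can be demonstrated on a tiny synthetic TSV file; the file contents below are invented for illustration, whereas the real example reads the FordA_TRAIN and FordA_TEST files instead:

```python
import os
import tempfile
import numpy as np

def readucr(filename):
    # Each row: label in the first column, the timeseries values in the rest.
    data = np.loadtxt(filename, delimiter="\t")
    y = data[:, 0]
    x = data[:, 1:]
    return x, y

# Write a tiny synthetic TSV (2 rows: label + 4 values each) to show the split.
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as f:
    f.write("-1\t0.1\t0.2\t0.3\t0.4\n1\t0.5\t0.6\t0.7\t0.8\n")
    path = f.name

x, y = readucr(path)
os.remove(path)
print(x.shape, y)  # (2, 4) [-1.  1.]
```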