---
license: cc-by-4.0
---
# Dataset Card for the Controlled Anomalies Time Series (CATS) Dataset

## Dataset Description

Cite the dataset as:

Patrick Fleith. (2023). Controlled Anomalies Time Series (CATS) Dataset (Version 2) [Data set]. Solenix Engineering GmbH. https://doi.org/10.5281/zenodo.8338435

### Dataset Summary

The Controlled Anomalies Time Series (CATS) Dataset consists of commands, external stimuli, and telemetry readings of a simulated complex dynamical system with **200 injected anomalies**.
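A minimal loading sketch; the Hugging Face repo id below is a placeholder and the single-table layout is an assumption, so check the dataset files for the actual names:

```python
# Hypothetical usage sketch: the repo id and file layout are assumptions.
from datasets import load_dataset

ds = load_dataset("patrickfleith/controlled-anomalies-time-series", split="train")
df = ds.to_pandas()  # one row per timestamp, 17 variable columns
print(df.shape)
```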
The CATS Dataset exhibits a set of desirable properties that make it very suitable for benchmarking Anomaly Detection Algorithms in Multivariate Time Series [1]; these properties are listed under Dataset Structure below.

### Supported Tasks and Leaderboards

Anomaly Detection in Multivariate Time Series
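As an illustration of the task (not a method shipped with the dataset), a minimal unsupervised baseline could flag timestamps whose rolling z-score is extreme in any channel; the window size and threshold below are assumptions:

```python
# Illustrative rolling z-score baseline for multivariate anomaly detection.
import pandas as pd

def zscore_flags(df: pd.DataFrame, window: int = 3600, thresh: float = 5.0) -> pd.Series:
    """Return a boolean Series marking timestamps flagged in any channel."""
    rolling = df.rolling(window, min_periods=window)
    z = (df - rolling.mean()) / rolling.std()
    return (z.abs() > thresh).any(axis=1)
```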
## Dataset Structure

- **Multivariate (17 variables) including sensor readings and control signals.** It simulates the operational behaviour of an arbitrary complex system, including:
  - **4 Deliberate Actuations / Control Commands sent by a simulated operator / controller**, for instance, commands of an operator to turn ON/OFF some equipment.
  - **3 Environmental Stimuli / External Forces** acting on the system and affecting its behaviour, for instance, the wind affecting the orientation of a large ground antenna.
  - **10 Telemetry Readings** representing the observable states of the complex system by means of sensors, for instance, a position, a temperature, a pressure, a voltage, a current, a humidity, a velocity, an acceleration, etc.
- **5 million timestamps.** Sensor readings are at 1 Hz sampling frequency.
  - **1 million nominal observations** (the first 1 million datapoints), suitable for learning the "normal" behaviour.
  - **4 million observations** that include both nominal and anomalous segments, suitable for evaluating both semi-supervised approaches (novelty detection) and unsupervised approaches (outlier detection).
- **200 anomalous segments.** One anomalous segment may contain several successive anomalous observations / timestamps. Only the last 4 million observations contain anomalous segments.
- **Different types of anomalies**, to understand which anomaly types can be detected by different approaches. The categories are available in the dataset and in the metadata.
- **Fine control over ground truth.** As this is a simulated system with deliberate anomaly injection, the start and end times of the anomalous behaviour are known very precisely. In contrast to real-world datasets, there is no risk that the ground truth contains mislabelled segments, which is often the case for real data.
- **Suitable for root cause analysis.** In addition to the anomaly category, the time series channel in which the anomaly first developed itself is recorded and made available as part of the metadata. This can be useful to evaluate the performance of algorithms in tracing anomalies back to the right root cause channel.
- **Affected channels.** In addition to the knowledge of the root cause channel in which the anomaly first developed itself, we provide information on the channels possibly affected by the anomaly. This can also be useful to evaluate the explainability of anomaly detection systems which may point to the anomalous channels (root cause and affected).
- **Obvious anomalies.** The simulated anomalies have been designed to be "easy" for human eyes to detect (i.e., there are very large spikes or oscillations), hence also detectable by most algorithms. This makes the synthetic dataset useful for screening tasks (i.e., to eliminate algorithms that are not capable of detecting those obvious anomalies). However, during our initial experiments, the dataset turned out to be challenging enough even for state-of-the-art anomaly detection approaches, making it suitable also for regular benchmark studies.
- **Context provided.** Some variables can only be considered anomalous in relation to other behaviours. A typical example is a light and switch pair: the light being either on or off is nominal, and the same goes for the switch, but having the switch on and the light off shall be considered anomalous. In the CATS dataset, users can choose whether or not to use the available context, and external stimuli, to test the usefulness of the context for detecting anomalies in this simulation.
- **Pure signal, ideal for robustness-to-noise analysis.** The simulated signals are provided without noise: while this may seem unrealistic at first, it is an advantage, since users of the dataset can add any type and amplitude of noise on top of the provided series. This makes the dataset well suited to test how sensitive and robust detection algorithms are against various levels of noise; see the sketch after this list.
- **No missing data.** You can drop whatever data you want to assess the impact of missing values on your detector with respect to a clean baseline.
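Because the signals ship noise-free and gap-free, robustness experiments can be layered on top of the clean baseline. A minimal sketch of the two perturbations described above, assuming Gaussian noise and uniformly random masking:

```python
# Perturbation helpers for robustness studies: Gaussian noise at a chosen
# amplitude, and random masking to simulate missing data.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x: np.ndarray, amplitude: float) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation `amplitude`."""
    return x + amplitude * rng.standard_normal(x.shape)

def mask_missing(x: np.ndarray, frac: float) -> np.ndarray:
    """Replace a random fraction `frac` of values with NaN."""
    out = np.asarray(x, dtype=float).copy()
    out[rng.random(x.shape) < frac] = np.nan
    return out
```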
### Data Splits

- The first 1 million points are nominal (no occurrence of anomalies).
- The next 4 million points include both nominal and anomalous segments.
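A split sketch following the description above, assuming a DataFrame `df` ordered by timestamp (for example, loaded as in the Dataset Description section):

```python
# The first 1 million timestamps are anomaly-free; the remaining 4 million
# mix nominal and anomalous segments.
N_NOMINAL = 1_000_000

train_df = df.iloc[:N_NOMINAL]   # fit "normal" behaviour here
test_df = df.iloc[N_NOMINAL:]    # evaluate detection here
```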
### Licensing Information

This dataset is released under the CC BY 4.0 license (cc-by-4.0).
### Citation Information

Patrick Fleith. (2023). Controlled Anomalies Time Series (CATS) Dataset (Version 2) [Data set]. Solenix Engineering GmbH. https://doi.org/10.5281/zenodo.8338435