# Parlogs-Observations Dataset

## Dataset Summary

Parlogs-Observations is a comprehensive dataset that includes Very Large Telescope (VLT) logs for template executions of the PIONIER, GRAVITY, and MATISSE instruments when they used the Auxiliary Telescopes (ATs), together with the logs of all VLTI subsystems and the ATs themselves. The dataset aggregates logs by instrument, time range, and subsystem, and covers template executions from 2019 in the VLTI infrastructure at Paranal. The dataset is stored as single Parquet files, which can be conveniently loaded, for example, with Pandas in Python.

Parlogs-Observations is publicly available as a 🤗 Hugging Face dataset.

## Supported Tasks and Leaderboards

The `parlogs-observations` dataset is a resource for researchers and practitioners in astronomy, data analysis, and machine learning. It enables a wide range of tasks focused on improving the understanding and operation of the Very Large Telescope Interferometer (VLTI) infrastructure. The following tasks are supported by the dataset:

- **Anomaly Detection**: Users can identify unusual patterns or abnormal behavior in the log data that could indicate errors or bugs. This is crucial for the operational maintenance of the VLTI.

- **System Diagnosis**: The dataset allows for diagnosing system failures or performance issues. By analyzing error logs, trace logs, or event logs, researchers can pinpoint and address the root causes of operational issues.

- **Performance Monitoring**: The dataset makes it feasible to monitor the performance of the VLTI systems. Users can track and analyze systems to understand resource usage, detect latency issues, or identify bottlenecks in the infrastructure.

- **Predictive Maintenance**: Leveraging the dataset for predictive maintenance helps in foreseeing system failures or issues before they occur, by analyzing trends and patterns in the log data to implement timely interventions.

## Overview

### Observations at Paranal

At Paranal, the Very Large Telescope (VLT) is one of the world's most advanced optical telescopes, consisting of four Unit Telescopes and four movable Auxiliary Telescopes. Astronomical observations are configured into Observation Blocks (OBs), which contain a sequence of Templates with parameters and scripts tailored to various scientific goals. Each template's execution follows a predictable behavior, allowing for detailed and systematic studies. The templates remain unchanged during a scientific period of six months; therefore, the templates referred to in the parlogs-observations dataset can be considered immutable source code.

### Machine Learning Techniques for parlogs-observations

Given the structured nature of the dataset, various machine learning techniques can be applied to extract insights and build models for the tasks mentioned above. Some of these techniques include:

- **Clustering Algorithms**: Such as K-means and hierarchical clustering, to group similar log messages or events and identify nested patterns in log data (see the sketch after this list).

- **Classification Algorithms**: Including Support Vector Machines (SVM), Random Forests, and Naive Bayes classifiers, for categorizing log messages and detecting anomalies.

- **Sequence Analysis and Pattern Recognition**: Utilizing Hidden Markov Models (HMMs) and Frequent Pattern Mining to model sequences of log messages or events and discover common patterns in logs.

- **Anomaly Detection Techniques**: Applying Isolation Forest and other advanced methods to identify outliers and anomalies in log data.

- **Natural Language Processing (NLP) Techniques**: Leveraging Topic Modeling and Word Embeddings to uncover thematic structures in log messages and transform text into meaningful numerical representations.

- **Deep Learning Techniques**: Employing Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), Graph Neural Networks (GNNs), Transformers, and Autoencoders for sophisticated modeling and analysis of time-series log data.

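As a concrete illustration of the clustering idea, the following is a minimal sketch that groups PIONIER log messages by their `logtext` content using TF-IDF features and K-means. It assumes scikit-learn and a Parquet engine such as pyarrow are installed, and that `PIONIER-1w-traces.parket` has been downloaded locally; the number of clusters is an arbitrary choice for illustration, not a recommendation.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Load one week of PIONIER event logs (file assumed to be downloaded locally).
traces = pd.read_parquet("PIONIER-1w-traces.parket")

# Vectorize the free-text log messages in the `logtext` field.
vectorizer = TfidfVectorizer(max_features=2000, stop_words="english")
X = vectorizer.fit_transform(traces["logtext"].fillna(""))

# Group messages into a handful of clusters (8 is an arbitrary choice).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
traces["cluster"] = kmeans.fit_predict(X)

# Show one representative message per cluster.
print(traces.groupby("cluster")["logtext"].first())
```
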
## Data Structure and Naming Conventions

The dataset is organized into Parquet files that follow a structured naming convention for easy identification and access based on the instrument, time range, and subsystems. This format ensures efficient data retrieval and manipulation, especially for large-scale data analysis:

```
{INSTRUMENT}-{TIME_RANGE}-{CONTENT}.parket
```

Where:
- `INSTRUMENT` can be PIONIER, GRAVITY, or MATISSE.
- `TIME_RANGE` is one of 1d, 1w, 1m, 6m.
- `CONTENT` can be meta, traces, traces-SUBSYSTEMS, or traces-TELESCOPES.

Example files:
- PIONIER-1w-meta.parket
- GRAVITY-1m-traces-SUBSYSTEMS.parket

The "meta" file includes information about the template execution, while the "traces" files contain event logs.

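As a small illustration of this naming scheme, the helper below builds a file name from its three components and validates them against the values listed above. It is a hypothetical convenience function for this README, not part of the dataset tooling.

```python
# Hypothetical helper illustrating the naming convention; not shipped with the dataset.
INSTRUMENTS = {"PIONIER", "GRAVITY", "MATISSE"}
TIME_RANGES = {"1d", "1w", "1m", "6m"}
CONTENTS = {"meta", "traces", "traces-SUBSYSTEMS", "traces-TELESCOPES"}

def dataset_filename(instrument: str, time_range: str, content: str) -> str:
    """Return the expected file name, e.g. 'GRAVITY-1m-traces-SUBSYSTEMS.parket'."""
    if instrument not in INSTRUMENTS or time_range not in TIME_RANGES or content not in CONTENTS:
        raise ValueError("Unknown instrument, time range, or content type")
    return f"{instrument}-{time_range}-{content}.parket"

print(dataset_filename("PIONIER", "1w", "meta"))  # PIONIER-1w-meta.parket
```
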
The existing files are shown in the table below:

| GRAVITY | PIONIER | MATISSE |
|---------------------------|---------------------------|---------------------------|
| GRAVITY-1d-meta.parket | PIONIER-1d-meta.parket | MATISSE-1d-meta.parket |
| GRAVITY-1d-traces-SUBSYSTEMS.parket | PIONIER-1d-traces-SUBSYSTEMS.parket | MATISSE-1d-traces-SUBSYSTEMS.parket |
| GRAVITY-1d-traces-TELESCOPES.parket | PIONIER-1d-traces-TELESCOPES.parket | MATISSE-1d-traces-TELESCOPES.parket |
| GRAVITY-1d-traces.parket | PIONIER-1d-traces.parket | MATISSE-1d-traces.parket |
| GRAVITY-1m-meta.parket | PIONIER-1m-meta.parket | MATISSE-1m-meta.parket |
| GRAVITY-1m-traces-SUBSYSTEMS.parket | PIONIER-1m-traces-SUBSYSTEMS.parket | MATISSE-1m-traces-SUBSYSTEMS.parket |
| GRAVITY-1m-traces-TELESCOPES.parket | PIONIER-1m-traces-TELESCOPES.parket | MATISSE-1m-traces-TELESCOPES.parket |
| GRAVITY-1m-traces.parket | PIONIER-1m-traces.parket | MATISSE-1m-traces.parket |
| GRAVITY-1w-meta.parket | PIONIER-1w-meta.parket | MATISSE-1w-meta.parket |
| GRAVITY-1w-traces-SUBSYSTEMS.parket | PIONIER-1w-traces-SUBSYSTEMS.parket | MATISSE-1w-traces-SUBSYSTEMS.parket |
| GRAVITY-1w-traces-TELESCOPES.parket | PIONIER-1w-traces-TELESCOPES.parket | MATISSE-1w-traces-TELESCOPES.parket |
| GRAVITY-1w-traces.parket | PIONIER-1w-traces.parket | MATISSE-1w-traces.parket |
| GRAVITY-6m-meta.parket | PIONIER-6m-meta.parket | MATISSE-6m-meta.parket |
| GRAVITY-6m-traces-SUBSYSTEMS.parket | PIONIER-6m-traces-SUBSYSTEMS.parket | MATISSE-6m-traces-SUBSYSTEMS.parket |
| GRAVITY-6m-traces-TELESCOPES.parket | PIONIER-6m-traces-TELESCOPES.parket | MATISSE-6m-traces-TELESCOPES.parket |
| GRAVITY-6m-traces.parket | PIONIER-6m-traces.parket | MATISSE-6m-traces.parket |

## Combining Files

Files from the same instrument and the same time range share the same trace identifiers (`trace_id`). For instance, consider the files:
- PIONIER-1w-meta.parket
- PIONIER-1w-traces.parket

The entries with `trace_id=10` in PIONIER-1w-traces.parket correspond to the template execution with id 10 in the meta file PIONIER-1w-meta.parket, as illustrated in the sketch below.

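A minimal sketch of this join is shown below, assuming Pandas with a Parquet engine (e.g. pyarrow) and that both files have been downloaded locally. The meta identifier is taken to be the row index here (as suggested by the row-numbered examples later in this README); if it is stored in an explicit `id` column instead, merge on that column.

```python
import pandas as pd

# Load one week of PIONIER event logs and their template-execution metadata
# (files assumed to be downloaded locally).
traces = pd.read_parquet("PIONIER-1w-traces.parket")
meta = pd.read_parquet("PIONIER-1w-meta.parket")

# Attach the template-execution metadata to every log line of that execution.
# If the meta identifier is an `id` column, use right_on="id" instead of right_index=True.
logs_with_meta = traces.merge(
    meta, left_on="trace_id", right_index=True, suffixes=("", "_meta")
)

# Example: keep only log lines that belong to executions that ended in error.
error_logs = logs_with_meta[logs_with_meta["ERROR"]]
print(error_logs[["@timestamp", "procname", "logtext"]].head())
```
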
## Data Instances

A typical entry in the dataset might look like this:

```python
# File: PIONIER-1m-traces.parket
# Row: 12268
{
    "@timestamp": 1554253173950,
    "system": "PIONIER",
    "hostname": "wpnr",
    "loghost": "wpnr",
    "logtype": "LOG",
    "envname": "wpnr",
    "procname": "pnoControl",
    "procid": 208,
    "module": "boss",
    "keywname": "",
    "keywvalue": "",
    "keywmask": "",
    "logtext": "Executing START command ...",
    "trace_id": 49
}
```

## Data Fields

The dataset contains structured logs from software operations related to astronomical instruments. Each entry in the log provides detailed information regarding a specific action or event recorded by the system. Below is the description of each field in the log entries, followed by a short usage sketch:

| Field | Description |
|-------------|---------------------------------------------------------------------------------------------------|
| @timestamp | The timestamp of the log entry, in milliseconds since the Unix epoch. |
| system | The name of the system (e.g., PIONIER) from which the log entry originates. |
| hostname | The hostname of the machine where the log entry was generated. |
| loghost | The host of the logging system that generated the entry. |
| logtype | Type of the log entry (e.g., LOG, FEVT, ERR), indicating its nature such as general log, event, or error. |
| envname | The environment name where the log was generated, providing context for the log entry. |
| procname | The name of the process that generated the log entry. |
| procid | The process ID associated with the log entry. |
| module | The module from which the log entry originated, indicating the specific part of the system. |
| keywname | Name of any keyword associated with the log entry, if applicable. It is always paired with keywvalue. |
| keywvalue | Value of the keyword mentioned in `keywname`, if applicable. |
| keywmask | Mask or additional context for the keyword, if applicable. |
| logtext | The actual text of the log entry, providing detailed information about the event or action. |
| trace_id | Identifier of the template execution the log entry belongs to; corresponds to the id in the metadata table. |

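To make these fields concrete, here is a minimal sketch that filters error-type entries via the `logtype` field and summarizes them per process and module. It assumes the file below has been downloaded locally and that a Parquet engine (e.g. pyarrow) is installed.

```python
import pandas as pd

# Load one month of PIONIER event logs (file assumed to be downloaded locally).
traces = pd.read_parquet("PIONIER-1m-traces.parket")

# Keep only entries flagged as errors via the `logtype` field.
errors = traces[traces["logtype"] == "ERR"]

# Count error lines per process and module to spot noisy components.
summary = errors.groupby(["procname", "module"]).size().sort_values(ascending=False)
print(summary.head(10))

# Convert the epoch-millisecond timestamps to datetimes when needed.
errors = errors.assign(ts=pd.to_datetime(errors["@timestamp"], unit="ms"))
```
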
## Dataset Metadata

Each meta Parquet file contains metadata about the corresponding template executions, including the instrument used, the execution interval, and the execution outcome. This is the format of a sample template execution in the metadata:

```python
# File: PIONIER-1m-meta.parket
# Row: 49
{
    "START": "2019-04-03 00:59:33.005000",
    "END": "2019-04-03 01:01:25.719000",
    "TIMEOUT": False,
    "system": "PIONIER",
    "procname": "bob_ins",
    "TPL_ID": "PIONIER_obs_calibrator",
    "ERROR": False,
    "Aborted": False,
    "SECONDS": 112.0,
    "TEL": "AT"
}
```

Where the fields are:

| Field | Comment |
| --------- | -------------------------------------------------------- |
| START | The start timestamp of the template execution (millisecond precision) |
| END | The end timestamp of the template execution (millisecond precision) |
| TIMEOUT | Indicates if the execution exceeded a predefined time limit |
| system | The name of the system used (e.g., PIONIER) |
| procname | The process name associated with the template execution |
| TPL_ID | The filename of the corresponding template file |
| ERROR | Indicates if there was an error during execution |
| Aborted | Indicates if the template execution was aborted (manually or because of an error) |
| SECONDS | The duration of the template execution in seconds |
| TEL | The class of telescope used in the observation; in this dataset it is always AT |

This structured format ensures a comprehensive understanding of each template's execution, providing insights into the operational dynamics of astronomical observations at Paranal.

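Because the metadata already records the outcome of each execution, it directly supports the anomaly detection and predictive maintenance tasks listed earlier. The following is a minimal sketch using only the fields documented above; it assumes the meta file has been downloaded locally and a Parquet engine (e.g. pyarrow) is installed.

```python
import pandas as pd

# Load the metadata of one month of PIONIER template executions
# (file assumed to be downloaded locally).
meta = pd.read_parquet("PIONIER-1m-meta.parket")

# An execution is considered problematic if it errored, timed out, or was aborted.
meta["problematic"] = meta["ERROR"] | meta["TIMEOUT"] | meta["Aborted"]
print(f"{meta['problematic'].sum()} of {len(meta)} executions were problematic")

# Compare execution durations (SECONDS) of clean vs. problematic runs per template.
print(meta.groupby(["TPL_ID", "problematic"])["SECONDS"].describe())
```
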
## Loading Data

The dataset can be loaded using Python libraries like Pandas. Here's an example of how to load a Parquet file:

```python
import pandas as pd

df = pd.read_parquet('PIONIER-1w-meta.parket')
```
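
Note that `pd.read_parquet` requires a Parquet engine such as `pyarrow` or `fastparquet` to be installed (e.g. `pip install pyarrow`). The path shown assumes the file has already been downloaded from the dataset repository to the working directory.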