---
license: mit
---
# Quakeflow_NC
## Introduction
This dataset contains part of the data (1970-2020) from the [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below; you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).
Cite the NCEDC and PhaseNet:

- Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
- NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.

Acknowledge the NCEDC:

- Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
```
Group: / len:16227
|- Group: /nc71111584 len:2
| |-* begin_time = 2020-01-02T07:01:19.620
| |-* depth_km = 3.69
| |-* end_time = 2020-01-02T07:03:19.620
| |-* event_id = nc71111584
| |-* event_time = 2020-01-02T07:01:48.240
| |-* event_time_index = 2862
| |-* latitude = 37.6545
| |-* longitude = -118.8798
| |-* magnitude = -0.15
| |-* magnitude_type = D
| |-* num_stations = 2
| |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
| | | |-* distance_km = 1.9
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 2391.0
| | | |-* emergence_angle = 159.0
| | | |-* event_id = ['nc71111584' 'nc71111584']
| | | |-* latitude = 37.6444
| | | |-* location =
| | | |-* longitude = -118.8968
| | | |-* network = NC
| | | |-* phase_index = [3000 3101]
| | | |-* phase_polarity = ['U' 'N']
| | | |-* phase_remark = ['IP' 'ES']
| | | |-* phase_score = [1 2]
| | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
| | | |-* phase_type = ['P' 'S']
| | | |-* snr = [2.82143 3.055604 1.8412642]
| | | |-* station = MCB
| | | |-* unit = 1e-6m/s
| |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
......
```
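If you download the HDF5 files directly, they can also be read with `h5py` alone. The sketch below follows the structure shown above; the file path is a placeholder for whichever file you download:
```python
import h5py

# hypothetical local path to one of the dataset's HDF5 files
h5_path = "path/to/quakeflow_nc_file.h5"

with h5py.File(h5_path, "r") as fp:
    # each top-level group is one event, keyed by event_id
    event_id = list(fp.keys())[0]
    event = fp[event_id]
    print(event.attrs["event_time"], event.attrs["magnitude"])

    # each dataset inside an event group is one station's 3-component waveform
    station_id = list(event.keys())[0]
    trace = event[station_id]
    waveform = trace[()]  # float32 array of shape (3, 12000)
    print(station_id, waveform.shape, trace.attrs["phase_type"], trace.attrs["phase_index"])
```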
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.
The sample of `station` is a dictionary with the following keys (see the shape-check sketch after the list):
- `data`: the waveform with shape `(3, nt)`; the default time length `nt` is 8192
- `phase_pick`: the probability of the phase picks with shape `(3, nt)`; the first dimension corresponds to noise, P, and S
- `event_location`: the event location with shape `(4,)`: latitude, longitude, depth, and time
- `station_location`: the station location with shape `(3,)`: latitude, longitude, and depth
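To make these shapes concrete, here is a minimal sketch (using the imports above and the `station_test` configuration, loaded as in the Usage section below) that pulls one sample and prints the documented shapes:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
sample = quakeflow_nc[0]

data = np.array(sample["data"])              # (3, nt), nt defaults to 8192
phase_pick = np.array(sample["phase_pick"])  # (3, nt): noise, P, S probabilities
print(data.shape, phase_pick.shape)
print(sample["event_location"])              # [latitude, longitude, depth, time]
print(sample["station_location"])            # [latitude, longitude, depth]
```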
The sample of `event` is a dictionary with the following keys (see the batching note after the list):
- `data`: the waveform with shape `(n_station, 3, nt)`; the default time length `nt` is 8192
- `phase_pick`: the probability of the phase picks with shape `(n_station, 3, nt)`; the second dimension corresponds to noise, P, and S
- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`; the default feature time length is 512
- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
- `station_location`: the space coordinates of the stations with shape `(n_station, 3)`: latitude, longitude, and depth
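Event samples contain a variable number of stations (`n_station` differs between events), which is why the event `DataLoader` example below uses `batch_size=1`. If you need larger batches, one option is a custom `collate_fn` that pads the station dimension. The sketch below is an assumption about how you might do that, not part of the dataset itself; it assumes each sample has already been converted to `torch` tensors as in the Usage sections:
```python
def pad_stations_collate(batch):
    # hypothetical helper: pad every field's station dimension to the largest
    # n_station in the batch, then stack into a single tensor per key
    max_stations = max(item["data"].shape[0] for item in batch)
    collated = {}
    for key in batch[0].keys():
        padded = []
        for item in batch:
            x = item[key]
            pad = max_stations - x.shape[0]
            if pad > 0:
                x = torch.cat([x, torch.zeros((pad, *x.shape[1:]), dtype=x.dtype)], dim=0)
            padded.append(x)
        collated[key] = torch.stack(padded)
    return collated

# usage: DataLoader(quakeflow_nc, batch_size=4, collate_fn=pad_stations_collate)
# remember to ignore the padded stations (e.g. via event_location_mask) in any loss
```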
The default configuration is `station_test`. You can specify the configuration with the `name` argument. For example:
```python
# load dataset
# ATTENTION: streaming (an IterableDataset) is difficult to support because of how HDF5 files are read,
# so we recommend loading the dataset directly and converting it into an iterable dataset afterwards.
# The dataset is very large, so the first load will take some time while the files are downloaded.

# to load "station_test" with the test split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# to load "event" with train split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
```
#### Usage for `station`
Then you can convert the dataset into a PyTorch-compatible iterable dataset and view the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# for the PyTorch DataLoader, we need to divide the dataset into several shards
num_workers = 4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# because formatting examples as tensors with the "torch" format is not implemented
# for iterable datasets yet, we add the conversion manually here;
# if you use the (non-iterable) dataset directly, just call
# quakeflow_nc.with_format("torch")
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"
# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```
#### Usage for `event`
Then you can convert the dataset into a PyTorch-compatible iterable dataset and view the first sample (don't forget to reorder the keys):
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event_test", split="test")
# for the PyTorch DataLoader, we need to divide the dataset into several shards
num_workers = 4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"
# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```