License: MIT

Quakeflow_NC

Introduction

This dataset contains part of the data from the NCEDC (Northern California Earthquake Data Center) and is organized as several HDF5 files. The dataset structure is shown below (the file ncedc_event_dataset_000.h5.txt shows the structure of the first shard of the dataset, and you can find more information about the format at AI4EPS). A minimal h5py sketch for reading a shard directly follows the structure listing.

Cite the NCEDC: "NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC."

Acknowledge the NCEDC: "Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC."

Group: / len:10000
  |- Group: /nc100012 len:5
  |  |-* begin_time = 1987-05-08T00:15:48.890
  |  |-* depth_km = 7.04
  |  |-* end_time = 1987-05-08T00:17:48.890
  |  |-* event_id = nc100012
  |  |-* event_time = 1987-05-08T00:16:14.700
  |  |-* event_time_index = 2581
  |  |-* latitude = 37.5423
  |  |-* longitude = -118.4412
  |  |-* magnitude = 1.1
  |  |-* magnitude_type = D
  |  |-* num_stations = 5
  |  |- Dataset: /nc100012/NC.MRS..EH (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 265.0
  |  |  |  |-* component = ['Z']
  |  |  |  |-* distance_km = 39.1
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 3680.0
  |  |  |  |-* emergence_angle = 93.0
  |  |  |  |-* event_id = ['nc100012' 'nc100012']
  |  |  |  |-* latitude = 37.5107
  |  |  |  |-* location = 
  |  |  |  |-* longitude = -118.8822
  |  |  |  |-* network = NC
  |  |  |  |-* phase_index = [3274 3802]
  |  |  |  |-* phase_polarity = ['U' 'N']
  |  |  |  |-* phase_remark = ['IP' 'S']
  |  |  |  |-* phase_score = [1 1]
  |  |  |  |-* phase_time = ['1987-05-08T00:16:21.630' '1987-05-08T00:16:26.920']
  |  |  |  |-* phase_type = ['P' 'S']
  |  |  |  |-* snr = [0.         0.         1.98844361]
  |  |  |  |-* station = MRS
  |  |  |  |-* unit = 1e-6m/s
  |  |- Dataset: /nc100012/NN.BEN.N1.EH (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 329.0
  |  |  |  |-* component = ['Z']
  |  |  |  |-* distance_km = 22.5
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 2476.0
  |  |  |  |-* emergence_angle = 102.0
  |  |  |  |-* event_id = ['nc100012' 'nc100012']
  |  |  |  |-* latitude = 37.7154
  |  |  |  |-* location = N1
  |  |  |  |-* longitude = -118.5741
  |  |  |  |-* network = NN
  |  |  |  |-* phase_index = [3010 3330]
  |  |  |  |-* phase_polarity = ['U' 'N']
  |  |  |  |-* phase_remark = ['IP' 'S']
  |  |  |  |-* phase_score = [0 0]
  |  |  |  |-* phase_time = ['1987-05-08T00:16:18.990' '1987-05-08T00:16:22.190']
  |  |  |  |-* phase_type = ['P' 'S']
  |  |  |  |-* snr = [0.         0.         7.31356192]
  |  |  |  |-* station = BEN
  |  |  |  |-* unit = 1e-6m/s
  ......
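
If you download a shard locally, you can also inspect it directly with h5py. The following is a minimal sketch (assuming a local copy of the first shard, ncedc_event_dataset_000.h5) that walks one event group and prints its attributes and waveform shapes:

import h5py

# open one locally downloaded shard (the path is an assumption for illustration)
with h5py.File("ncedc_event_dataset_000.h5", "r") as fp:
    event_id = list(fp.keys())[0]            # e.g. "nc100012"
    event = fp[event_id]
    print(event_id, dict(event.attrs))       # event-level attributes: time, location, magnitude, ...
    for station_id, trace in event.items():
        # each dataset is a (3, nt) float32 waveform with station/phase attributes
        print(station_id, trace.shape, trace.attrs["phase_type"], trace.attrs["phase_index"])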

How to use

Requirements

  • datasets
  • h5py
  • torch (for PyTorch)

Usage

Import the necessary packages:

import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset

We provide 2 configurations of the dataset: NCEDC and NCEDC_full_size. Both return event-based samples one by one, but NCEDC returns samples with 10 stations each, while NCEDC_full_size returns samples with all stations available in the original data.

The sample of NCEDC is a dictionary with the following keys:

  • waveform: the waveform with shape (3, nt, n_sta); the first dimension holds the 3 components, the second is the number of time samples, and the third is the number of stations
  • phase_pick: the phase-pick probabilities with shape (3, nt, n_sta); the first dimension holds the noise, P, and S channels
  • event_location: the event location with shape (4,), containing latitude, longitude, depth, and time
  • station_location: the station locations with shape (n_sta, 3); the second dimension holds latitude, longitude, and depth

Because Hugging Face datasets only support a dynamic size on the first dimension, NCEDC_full_size puts the station dimension first. Its sample is a dictionary with the following keys:

  • waveform: the waveform with shape (n_sta, 3, nt)
  • phase_pick: the phase-pick probabilities with shape (n_sta, 3, nt)
  • event_location: the event location with shape (4,)
  • station_location: the station locations with shape (n_sta, 3); the second dimension holds latitude, longitude, and depth
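
As a quick sanity check, here is a minimal sketch (assuming the dataset has already been downloaded) that loads the default configuration and prints the keys and shapes described above:

from datasets import load_dataset
import numpy as np

# peek at the declared schema and the first sample of the "NCEDC" configuration
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
print(quakeflow_nc.features)
sample = quakeflow_nc[0]
for key, value in sample.items():
    print(key, np.array(value).shape)   # e.g. waveform -> (3, nt, 10)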

The default configuration is NCEDC. You can select a configuration with the name argument. For example:

# load dataset
# ATTENTION: streaming (IterableDataset) is difficult to support because of the HDF5 format,
# so we recommend loading the dataset directly and converting it to an iterable dataset later
# The dataset is very large, so the first load will take some time to download

# to load "NCEDC"
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")

# to load "NCEDC_full_size"
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC_full_size", split="train")

If you want to use the first several shards of the dataset, you can download the script quakeflow_nc.py and change the code as below:

# change 37 to the number of shards you want
_URLS = {
    "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
}

Then you can load the dataset from the local script like this (don't forget to specify the configuration with the name argument if you need a non-default configuration):

# don't forget to specify the script path
quakeflow_nc = datasets.load_dataset("path_to_script/quakeflow_nc.py", split="train")
quakeflow_nc

Usage for NCEDC

Then you can convert the dataset into a PyTorch-compatible iterable dataset and view the first sample:

quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
quakeflow_nc = quakeflow_nc.to_iterable_dataset()
# formatting examples as tensors via the "torch" format is not implemented yet
# for iterable datasets, so we add the conversion manually
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=4)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break

Usage for NCEDC_full_size

Then you can convert the dataset into a PyTorch-compatible iterable dataset and view the first sample (don't forget to permute the waveform and phase_pick dimensions, as done by reorder_keys below):

quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="train", name="NCEDC_full_size")

# for a multi-worker PyTorch DataLoader, the iterable dataset must be split into shards
num_workers = 4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# formatting examples as tensors via the "torch" format is not implemented yet
# for iterable datasets, so we add the conversion manually
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})

def reorder_keys(example):
    # move the station dimension to the end: (n_sta, 3, nt) -> (3, nt, n_sta)
    example["waveform"] = example["waveform"].permute(1, 2, 0).contiguous()
    example["phase_pick"] = example["phase_pick"].permute(1, 2, 0).contiguous()
    return example

quakeflow_nc = quakeflow_nc.map(reorder_keys)

assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"

data_loader = DataLoader(
    quakeflow_nc,
    batch_size=1,
    num_workers=num_workers,
)

for batch in quakeflow_nc:
    print("\nIterable test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break

for batch in data_loader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        batch[key] = batch[key].squeeze(0)
        print(key, batch[key].shape, batch[key].dtype)
    break
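
Because NCEDC_full_size samples contain a variable number of stations, the example above uses batch_size=1 and squeezes the batch dimension. If you need larger batches, one option is a custom collate function that zero-pads the station dimension before stacking. The following is a sketch only; pad_stations_collate is a hypothetical helper, not part of the dataset:

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

# hypothetical collate_fn: zero-pad the station dimension so that samples
# (after reorder_keys, shapes (3, nt, n_sta)) can be batched with batch_size > 1
def pad_stations_collate(examples):
    max_sta = max(x["waveform"].shape[-1] for x in examples)
    batch = {}
    for key in ["waveform", "phase_pick"]:
        # pad the last (station) dimension with zeros up to max_sta
        batch[key] = torch.stack([F.pad(x[key], (0, max_sta - x[key].shape[-1])) for x in examples])
    # station_location has shape (n_sta, 3); pad its first dimension
    batch["station_location"] = torch.stack(
        [F.pad(x["station_location"], (0, 0, 0, max_sta - x["station_location"].shape[0])) for x in examples]
    )
    batch["event_location"] = torch.stack([x["event_location"] for x in examples])
    return batch

data_loader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers, collate_fn=pad_stations_collate)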