bruAristimunha committed
Commit efef8a3 · verified · 1 Parent(s): ce5f44f

Metadata stub for nm000133

Files changed (2)
  1. README.md +80 -0
  2. eegdash.json +17 -0
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ pretty_name: "Alljoined1"
+ license: other
+ tags:
+ - eeg
+ - neuroscience
+ - eegdash
+ - brain-computer-interface
+ - pytorch
+ size_categories:
+ - n<1K
+ task_categories:
+ - other
+ ---
+
+ # Alljoined1
+
+ **Dataset ID:** `nm000133`
+
+ _Xu2024_
+
+ **Canonical aliases:** `Alljoined1` · `Alljoined`
+
+ > **At a glance:** EEG · 8 subjects · 13 recordings · CC-BY-NC-ND-4.0
+
+ ## Load this dataset
+
+ This repo is a **pointer**. The raw EEG data lives at its canonical source
+ (OpenNeuro / NEMAR); [EEGDash](https://github.com/eegdash/EEGDash) streams it
+ on demand and returns a PyTorch / braindecode dataset.
+
+ ```python
+ # pip install eegdash
+ from eegdash import EEGDashDataset
+
+ ds = EEGDashDataset(dataset="nm000133", cache_dir="./cache")
+ print(len(ds), "recordings")
+ ```
+
+ You can also load it by canonical alias — these are registered classes in `eegdash.dataset`:
+
+ ```python
+ from eegdash.dataset import Alljoined1
+ ds = Alljoined1(cache_dir="./cache")
+ ```
+
+ If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout,
+ you can also pull it directly:
+
+ ```python
+ from braindecode.datasets import BaseConcatDataset
+ ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000133")
+ ```
+
+ ## Dataset metadata
+
+ | | |
+ |---|---|
+ | **Subjects** | 8 |
+ | **Recordings** | 13 |
+ | **Tasks (count)** | 1 |
+ | **Channels** | 64 (×13) |
+ | **Sampling rate (Hz)** | 512 (×13) |
+ | **Size on disk** | 7.6 GB |
+ | **Recording type** | EEG |
+ | **Source** | nemar |
+ | **License** | CC-BY-NC-ND-4.0 |
+
+ ## Links
+
+ - **DOI:** [10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133)
+ - **NEMAR:** [nm000133](https://nemar.org/dataexplorer/detail?dataset_id=nm000133)
+ - **Browse 700+ datasets:** [EEGDash catalog](https://huggingface.co/spaces/EEGDash/catalog)
+ - **Docs:** <https://eegdash.org>
+ - **Code:** <https://github.com/eegdash/EEGDash>
+
+ ---
+
+ _Auto-generated from [dataset_summary.csv](https://github.com/eegdash/EEGDash/blob/main/eegdash/dataset/dataset_summary.csv) and the [EEGDash API](https://data.eegdash.org/api/eegdash/datasets/summary/nm000133). Do not edit this file by hand — update the upstream source and re-run `scripts/push_metadata_stubs.py`._
eegdash.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "dataset_id": "nm000133",
+   "title": "Alljoined1",
+   "source": "nemar",
+   "source_url": "https://openneuro.org/datasets/nm000133",
+   "doi": "10.82901/nemar.nm000133",
+   "license": "CC-BY-NC-ND-4.0",
+   "loader": {
+     "library": "eegdash",
+     "class": "EEGDashDataset",
+     "kwargs": {
+       "dataset": "nm000133"
+     }
+   },
+   "catalog": "https://huggingface.co/spaces/EEGDash/catalog",
+   "generated_by": "huggingface-space/scripts/push_metadata_stubs.py"
+ }
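
The `eegdash.json` stub added above is machine-readable: its `loader` object names a library, a class, and the kwargs needed to instantiate it. A minimal sketch of resolving that loader dynamically (stub contents inlined here for self-containment; actually instantiating the class would require `pip install eegdash`, so that step is shown commented out):

```python
import importlib
import json

# Inlined copy of the relevant part of eegdash.json from this commit.
stub = json.loads("""
{
  "dataset_id": "nm000133",
  "loader": {
    "library": "eegdash",
    "class": "EEGDashDataset",
    "kwargs": {"dataset": "nm000133"}
  }
}
""")

loader = stub["loader"]
print(loader["library"], loader["class"], loader["kwargs"])

# With eegdash installed, the stub fully determines the call:
# mod = importlib.import_module(loader["library"])
# ds = getattr(mod, loader["class"])(cache_dir="./cache", **loader["kwargs"])
```

This is the generic pattern a catalog consumer could use to load any EEGDash stub without hard-coding the class name.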