# EEGInceptionMI

EEG Inception for Motor Imagery, as proposed in Zhang et al. (2021) [1].

> **Architecture-only repository.** Documents the
> `braindecode.models.EEGInceptionMI` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.

## Quick start
```python
from braindecode.models import EEGInceptionMI

model = EEGInceptionMI(
    n_chans=22,    # EEG channels (BCI IV 2a montage)
    n_outputs=4,   # motor-imagery classes
    n_times=1000,  # samples per input window
)
```

The signal-shape arguments above are illustrative defaults; adjust them to
match your recording.
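As a quick sanity check on those shapes, the window length in samples is just the window duration multiplied by the sampling rate. A minimal helper (not part of braindecode) using the defaults reported in [1]:

```python
def n_times_from_seconds(input_window_seconds: float = 4.5, sfreq: float = 250.0) -> int:
    """Window length in samples; defaults follow [1] for BCI IV 2a."""
    return int(round(input_window_seconds * sfreq))

print(n_times_from_seconds())     # 1125 (4.5 s at 250 Hz)
print(n_times_from_seconds(4.0))  # 1000 samples for a 4 s window
```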
## Documentation

- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.EEGInceptionMI.html>
- Interactive browser (live instantiation, parameter counts): <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eeginception_mi.py#L14>
## Architecture

![EEGInceptionMI Architecture](https://content.cld.iop.org/journals/1741-2552/18/4/046014/revision3/jneabed81f1_hr.jpg)

The model is strongly based on the original InceptionNet for computer
vision: its main goal is to extract features in parallel at different
temporal scales. The network has two blocks, each made of three inception
modules with a skip connection. The model is fully described in [1].

> **Note:** this implementation was reimplemented from the paper [1] and
> has not been checked by the original authors, so it is not guaranteed to
> be exact.
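To make the parallel multi-scale idea concrete, here is an illustrative pure-Python sketch of an inception-style module. It is a stand-in with fixed moving-average kernels, not the actual braindecode implementation (which uses learned PyTorch convolutions); it only shows the shape of the computation: several convolutions of different widths run over the same input, and their outputs are gathered as parallel feature streams.

```python
def conv1d_same(x, kernel):
    """Naive 1D convolution with zero padding so output length equals input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k)) for i in range(len(x))]

def inception_module(x, kernel_sizes=(3, 5, 7)):
    """Run parallel convolutions at several temporal scales over one signal."""
    outputs = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k  # moving average as a stand-in for learned weights
        outputs.append(conv1d_same(x, kernel))
    return outputs  # one feature sequence per scale; the real model concatenates these
```

Because every branch uses "same" padding, the streams stay aligned in time and can be concatenated along the feature dimension, which is what lets the real model mix scales freely.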
## Parameters

| Parameter | Type | Description |
|---|---|---|
| `input_window_seconds` | float, optional | Size of the input, in seconds. Set to 4.5 s as in [1] for dataset BCI IV 2a. |
| `sfreq` | float, optional | EEG sampling frequency in Hz. Defaults to 250 Hz as in [1] for dataset BCI IV 2a. |
| `n_convs` | int, optional | Number of convolutions per inception branch. Defaults to 5 as in [1] for dataset BCI IV 2a. |
| `n_filters` | int, optional | Number of convolutional filters for all layers of this type. Set to 48 as in [1] for dataset BCI IV 2a. |
| `kernel_unit_s` | float, optional | Size in seconds of the basic 1D convolutional kernel used in inception modules. Each convolutional layer in such modules has kernels of increasing size, odd multiples of this value (e.g. 0.1, 0.3, 0.5, 0.7, 0.9 for `n_convs=5`). Defaults to 0.1 s. |
| `activation` | nn.Module, optional | Activation function. Defaults to ReLU. |
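The `kernel_unit_s` rule above can be sketched numerically. With the defaults from [1] (`kernel_unit_s=0.1`, `sfreq=250`, `n_convs=5`), the five parallel kernels span 0.1 s to 0.9 s; how braindecode itself rounds seconds to samples may differ slightly, so this is just the arithmetic of the description:

```python
def inception_kernel_lengths(kernel_unit_s: float = 0.1, sfreq: float = 250.0, n_convs: int = 5):
    """Kernel lengths in samples: odd multiples (1, 3, 5, ...) of the base unit."""
    return [int(round((2 * i + 1) * kernel_unit_s * sfreq)) for i in range(n_convs)]

print(inception_kernel_lengths())  # [25, 75, 125, 175, 225]
```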
## References

1. Zhang, C., Kim, Y. K., & Eskandarian, A. (2021). EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification. Journal of Neural Engineering, 18(4), 046014.
## Citation

Cite the original architecture paper (see *References* above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,