---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# EEGSimpleConv

EEGSimpleConv from Ouahidi, Y. E., et al. (2023).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.EEGSimpleConv` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import EEGSimpleConv

model = EEGSimpleConv(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.EEGSimpleConv.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eegsimpleconv.py#L21>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>EEGSimpleConv from Ouahidi, YE et al (2023) [Yassine2023]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

.. figure:: https://raw.githubusercontent.com/elouayas/EEGSimpleConv/refs/heads/main/architecture.png
   :align: center
   :alt: EEGSimpleConv Architecture

EEGSimpleConv is a 1D convolutional neural network originally designed
for decoding motor imagery from EEG signals. The model aims for a very
simple and straightforward architecture that allows low latency, while
still achieving very competitive performance.

EEG-SimpleConv starts with a 1D convolutional layer, where each EEG channel
enters a separate 1D convolutional channel. This is followed by a series of
blocks of two 1D convolutional layers. Between the two convolutional layers
of each block is a max pooling layer, which downsamples the data by a factor
of 2. Each convolution is followed by a batch normalisation layer and a ReLU
activation function. Finally, a global average pooling (in the time domain)
is performed to obtain a single value per feature map, which is then fed
into a linear layer to obtain the final classification prediction.

The paper and original code, with more details about the methodological
choices, are available at [Yassine2023]_ and [Yassine2023Code]_.

The input should be a three-dimensional matrix representing the EEG
signals, with shape

``(batch_size, n_channels, n_timesteps)``.

Notes
-----
The authors recommend using the default parameters for MI decoding.
Please refer to the original paper and code for more details.

Recommended ranges for the hyperparameters, depending on the
evaluation paradigm:

| Parameter       | Within-Subject | Cross-Subject |
|-----------------|----------------|---------------|
| feature_maps    | [64-144]       | [64-144]      |
| n_convs         | 1              | [2-4]         |
| resampling_freq | [70-100]       | [50-80]       |
| kernel_size     | [12-17]        | [5-8]         |

An intensive ablation study is included in the paper to understand the
effect of each parameter on the model performance.

.. versionadded:: 0.9

Parameters
----------
feature_maps : int
    Number of feature maps at the first convolution; the width of the model.
n_convs : int
    Number of blocks of convolutions (two convolutions per block); the depth of the model.
resampling : int
    Resampling frequency.
kernel_size : int
    Size of the convolution kernels.
activation : nn.Module, default=nn.ELU
    Activation function class to apply. Should be a PyTorch activation
    module class like ``nn.ReLU`` or ``nn.ELU``. Default is ``nn.ELU``.

References
----------
.. [Yassine2023] Yassine El Ouahidi, V. Gripon, B. Pasdeloup, G. Bouallegue,
   N. Farrugia, G. Lioi, 2023. A Strong and Simple Deep Learning Baseline for
   BCI Motor Imagery Decoding. arXiv preprint. arxiv.org/abs/2309.07159
.. [Yassine2023Code] Yassine El Ouahidi, V. Gripon, B. Pasdeloup, G. Bouallegue,
   N. Farrugia, G. Lioi, 2023. A Strong and Simple Deep Learning Baseline for
   BCI Motor Imagery Decoding. GitHub repository.
   https://github.com/elouayas/EEGSimpleConv.

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import EEGSimpleConv

    # Train your model
    model = EEGSimpleConv(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-eegsimpleconv-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import EEGSimpleConv

    # Load pretrained model
    model = EEGSimpleConv.from_pretrained("username/my-eegsimpleconv-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = EEGSimpleConv.from_pretrained("username/my-eegsimpleconv-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = EEGSimpleConv.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, number of filters) are automatically
saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
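The block structure described in the docstring can be sketched in plain PyTorch. This is an illustrative simplification under stated assumptions, not the braindecode implementation: the class name, exact layer ordering, and defaults below are mine. It shows a channel-mixing stem convolution, `n_convs` blocks of two convolutions with a factor-2 max pooling between them, batch norm and ReLU after each convolution, then global average pooling over time and a linear head.

```python
import torch
from torch import nn


class SimpleConvSketch(nn.Module):
    """Illustrative sketch of the EEGSimpleConv design (not the braindecode code)."""

    def __init__(self, n_chans=22, n_outputs=4, feature_maps=64,
                 n_convs=2, kernel_size=8):
        super().__init__()
        pad = kernel_size // 2
        # Stem: one 1D convolution mixing the EEG channels into feature maps.
        layers = [
            nn.Conv1d(n_chans, feature_maps, kernel_size, padding=pad),
            nn.BatchNorm1d(feature_maps),
            nn.ReLU(),
        ]
        # Blocks of two convolutions, with a max pooling (factor 2) between them;
        # each convolution is followed by batch norm and ReLU.
        for _ in range(n_convs):
            layers += [
                nn.Conv1d(feature_maps, feature_maps, kernel_size, padding=pad),
                nn.BatchNorm1d(feature_maps),
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(feature_maps, feature_maps, kernel_size, padding=pad),
                nn.BatchNorm1d(feature_maps),
                nn.ReLU(),
            ]
        self.encoder = nn.Sequential(*layers)
        self.head = nn.Linear(feature_maps, n_outputs)

    def forward(self, x):
        h = self.encoder(x)   # (batch, feature_maps, time)
        h = h.mean(dim=-1)    # global average pooling over the time dimension
        return self.head(h)   # (batch, n_outputs)


out = SimpleConvSketch()(torch.randn(8, 22, 1000))
print(out.shape)  # torch.Size([8, 4])
```

The global average pooling makes the classifier independent of the input length, which is one reason the real model can accept different window sizes and resampling frequencies.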

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.