Commit 695ce77 (verified) by bruAristimunha · parent dcc84d2 · "Add architecture-only model card" (README.md added, +196 lines)

---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# ShallowFBCSPNet

Shallow ConvNet model from Schirrmeister et al. (2017).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.ShallowFBCSPNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.ShallowFBCSPNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/shallow_fbcsp.py#L24>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Shallow ConvNet model from Schirrmeister et al. (2017) [Schirrmeister2017]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

.. figure:: https://onlinelibrary.wiley.com/cms/asset/221ea375-6701-40d3-ab3f-e411aad62d9e/hbm23730-fig-0002-m.jpg
   :align: center
   :alt: ShallowNet Architecture

   Model described in [Schirrmeister2017]_.

Parameters
----------
n_filters_time : int
    Number of temporal filters.
filter_time_length : int
    Length of the temporal filter.
n_filters_spat : int
    Number of spatial filters.
pool_time_length : int
    Length of the temporal pooling filter.
pool_time_stride : int
    Stride between temporal pooling filters.
final_conv_length : int | str
    Length of the final convolution layer.
    If set to "auto", the length of the input signal must be specified.
conv_nonlin : type[nn.Module] | Callable
    Non-linear module class to be used after convolution layers.
    For backward compatibility, callables are also accepted and wrapped
    with :class:`~braindecode.modules.Expression`.
pool_mode : str
    Method to use on pooling layers: "max" or "mean".
activation_pool_nonlin : type[nn.Module]
    Non-linear module class to be used after pooling layers.
split_first_layer : bool
    Split the first layer into temporal and spatial layers (True) or use a
    single temporal layer (False). There is no non-linearity between the
    split layers.
batch_norm : bool
    Whether to use batch normalisation.
batch_norm_alpha : float
    Momentum for BatchNorm2d.
drop_prob : float
    Dropout probability.

References
----------
.. [Schirrmeister2017] Schirrmeister, R. T., Springenberg, J. T., Fiederer,
   L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F.
   & Ball, T. (2017).
   Deep learning with convolutional neural networks for EEG decoding and
   visualization.
   Human Brain Mapping, Aug. 2017.
   Online: http://dx.doi.org/10.1002/hbm.23730

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import ShallowFBCSPNet

    # Train your model
    model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-shallowfbcspnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import ShallowFBCSPNet

    # Load the pretrained model
    model = ShallowFBCSPNet.from_pretrained("username/my-shallowfbcspnet-model")

    # Load with a different number of outputs (the head is rebuilt automatically)
    model = ShallowFBCSPNet.from_pretrained("username/my-shallowfbcspnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring the full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = ShallowFBCSPNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a published checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.