bruAristimunha committed on
Commit a2845a1 · verified · 1 Parent(s): 695ce77

Replace with clean markdown card

Files changed (1)
  1. README.md +29 -128
README.md CHANGED
@@ -13,13 +13,12 @@ tags:
 
 # ShallowFBCSPNet
 
- Shallow ConvNet model from Schirrmeister et al. (2017).
 
- > **Architecture-only repository.** This repo documents the
 > `braindecode.models.ShallowFBCSPNet` class. **No pretrained weights are
- > distributed here** — instantiate the model and train it on your own
- > data, or fine-tune from a published foundation-model checkpoint
- > separately.
 
 ## Quick start
 
@@ -38,146 +37,48 @@ model = ShallowFBCSPNet(
 )
 ```
 
- The signal-shape arguments above are example defaults — adjust them
- to match your recording.
 
 ## Documentation
-
- - Full API reference (parameters, references, architecture figure):
-   <https://braindecode.org/stable/generated/braindecode.models.ShallowFBCSPNet.html>
- - Interactive browser with live instantiation:
   <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/shallow_fbcsp.py#L24>
 
- ## Architecture description
-
- The block below is the rendered class docstring (parameters,
- references, architecture figure where available).
-
- <div class='bd-doc'><main>
- <p>Shallow ConvNet model from Schirrmeister et al. (2017) [Schirrmeister2017]_.</p>
- <span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>
-
- .. figure:: https://onlinelibrary.wiley.com/cms/asset/221ea375-6701-40d3-ab3f-e411aad62d9e/hbm23730-fig-0002-m.jpg
-    :align: center
-    :alt: ShallowNet Architecture
-
-    Model described in [Schirrmeister2017]_.
-
- Parameters
- ----------
- n_filters_time : int
-     Number of temporal filters.
- filter_time_length : int
-     Length of the temporal filter.
- n_filters_spat : int
-     Number of spatial filters.
- pool_time_length : int
-     Length of the temporal pooling filter.
- pool_time_stride : int
-     Stride between temporal pooling filters.
- final_conv_length : int | str
-     Length of the final convolution layer.
-     If set to "auto", the length of the input signal must be specified.
- conv_nonlin : type[nn.Module] | Callable
-     Non-linear module class to be used after convolution layers.
-     For backward compatibility, callables are also accepted and wrapped
-     with :class:`~braindecode.modules.Expression`.
- pool_mode : str
-     Method to use on pooling layers: "max" or "mean".
- activation_pool_nonlin : type[nn.Module]
-     Non-linear module class to be used after pooling layers.
- split_first_layer : bool
-     Split the first layer into temporal and spatial layers (True) or just
-     use temporal (False). There is no non-linearity between the split layers.
- batch_norm : bool
-     Whether to use batch normalisation.
- batch_norm_alpha : float
-     Momentum for BatchNorm2d.
- drop_prob : float
-     Dropout probability.
-
- References
- ----------
- .. [Schirrmeister2017] Schirrmeister, R. T., Springenberg, J. T., Fiederer,
-    L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F.
-    & Ball, T. (2017).
-    Deep learning with convolutional neural networks for EEG decoding and
-    visualization.
-    Human Brain Mapping, Aug. 2017.
-    Online: http://dx.doi.org/10.1002/hbm.23730
-
- .. rubric:: Hugging Face Hub integration
-
- When the optional ``huggingface_hub`` package is installed, all models
- automatically gain the ability to be pushed to and loaded from the
- Hugging Face Hub. Install with::
-
-     pip install braindecode[hub]
-
- **Pushing a model to the Hub:**
-
- .. code::
-
-     from braindecode.models import ShallowFBCSPNet
-
-     # Train your model
-     model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)
-     # ... training code ...
-
-     # Push to the Hub
-     model.push_to_hub(
-         repo_id="username/my-shallowfbcspnet-model",
-         commit_message="Initial model upload",
-     )
-
- **Loading a model from the Hub:**
-
- .. code::
-
-     from braindecode.models import ShallowFBCSPNet
-
-     # Load pretrained model
-     model = ShallowFBCSPNet.from_pretrained("username/my-shallowfbcspnet-model")
-
-     # Load with a different number of outputs (head is rebuilt automatically)
-     model = ShallowFBCSPNet.from_pretrained("username/my-shallowfbcspnet-model", n_outputs=4)
-
- **Extracting features and replacing the head:**
-
- .. code::
-
-     import torch
-
-     x = torch.randn(1, model.n_chans, model.n_times)
-     # Extract encoder features (consistent dict across all models)
-     out = model(x, return_features=True)
-     features = out["features"]
-
-     # Replace the classification head
-     model.reset_head(n_outputs=10)
-
- **Saving and restoring full configuration:**
-
- .. code::
-
-     import json
-
-     config = model.get_config()  # all __init__ params
-     with open("config.json", "w") as f:
-         json.dump(config, f)
-
-     model2 = ShallowFBCSPNet.from_config(config)  # reconstruct (no weights)
-
- All model parameters (both EEG-specific and model-specific, such as
- dropout rates, activation functions, and number of filters) are
- automatically saved to the Hub and restored when loading.
-
- See :ref:`load-pretrained-models` for a complete tutorial.</main>
- </div>
 
 ## Citation
 
- Please cite both the original paper for this architecture (see the
- *References* section above) and braindecode:
 
 ```bibtex
 @article{aristimunha2025braindecode,
 
@@ -13,13 +13,12 @@ tags:
 
 # ShallowFBCSPNet
 
+ Shallow ConvNet model from Schirrmeister et al. (2017) [1].
 
+ > **Architecture-only repository.** Documents the
 > `braindecode.models.ShallowFBCSPNet` class. **No pretrained weights are
+ > distributed here.** Instantiate the model and train it on your own
+ > data.
 
 ## Quick start
 
@@ -38,146 +37,48 @@ model = ShallowFBCSPNet(
 )
 ```
 
+ The signal-shape arguments above are illustrative defaults; adjust to
+ match your recording.
 
 ## Documentation
+ - Full API reference: <https://braindecode.org/stable/generated/braindecode.models.ShallowFBCSPNet.html>
+ - Interactive browser (live instantiation, parameter counts):
   <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/shallow_fbcsp.py#L24>
 
+ ## Architecture
 
+ ![ShallowFBCSPNet architecture](https://onlinelibrary.wiley.com/cms/asset/221ea375-6701-40d3-ab3f-e411aad62d9e/hbm23730-fig-0002-m.jpg)
 
+ ## Parameters
 
+ | Parameter | Type | Description |
+ |---|---|---|
+ | `n_filters_time` | `int` | Number of temporal filters. |
+ | `filter_time_length` | `int` | Length of the temporal filter. |
+ | `n_filters_spat` | `int` | Number of spatial filters. |
+ | `pool_time_length` | `int` | Length of the temporal pooling filter. |
+ | `pool_time_stride` | `int` | Stride between temporal pooling filters. |
+ | `final_conv_length` | `int \| str` | Length of the final convolution layer. If set to `"auto"`, the length of the input signal must be specified. |
+ | `conv_nonlin` | `type[nn.Module] \| Callable` | Non-linear module class used after convolution layers. For backward compatibility, callables are also accepted and wrapped with `braindecode.modules.Expression`. |
+ | `pool_mode` | `str` | Pooling method: `"max"` or `"mean"`. |
+ | `activation_pool_nonlin` | `type[nn.Module]` | Non-linear module class used after pooling layers. |
+ | `split_first_layer` | `bool` | Split the first layer into temporal and spatial layers (`True`) or use a single temporal layer (`False`). There is no non-linearity between the split layers. |
+ | `batch_norm` | `bool` | Whether to use batch normalisation. |
+ | `batch_norm_alpha` | `float` | Momentum for `BatchNorm2d`. |
+ | `drop_prob` | `float` | Dropout probability. |
 
+ ## References
 
+ 1. Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
 
 ## Citation
 
+ Cite the original architecture paper (see *References* above) and braindecode:
 
 ```bibtex
  @article{aristimunha2025braindecode,