---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- foundation-model
- convolutional
---

# SignalJEPA_PostLocal

Post-local downstream architecture introduced in signal-JEPA (Guetschel et al., 2024).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.SignalJEPA_PostLocal` class. **No pretrained weights
> are distributed here**: instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import SignalJEPA_PostLocal

model = SignalJEPA_PostLocal(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults; adjust them
to match your recording.
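For example, the number of time samples the model expects per window follows directly from `sfreq` and `input_window_seconds`; a plain-Python sketch of that arithmetic, using the quick-start values above:

```python
# Expected input window shape for the quick-start settings:
# (n_chans, n_times) with n_times = sfreq * input_window_seconds.
n_chans = 22
sfreq = 250                  # Hz
input_window_seconds = 4.0

n_times = int(sfreq * input_window_seconds)
print((n_chans, n_times))    # -> (22, 1000)
```

A batch passed to the model would then have shape `(batch_size, 22, 1000)`.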

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.SignalJEPA_PostLocal.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/signal_jepa.py#L749>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Post-local downstream architecture introduced in signal-JEPA (Guetschel et al., 2024) [1]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

<span style="display:inline-block;padding:2px 8px;border-radius:4px;border:1px solid #212529;color:#212529;font-size:11px;font-weight:600;margin-right:4px;">Channel</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#d9534f;color:white;font-size:11px;font-weight:600;margin-right:4px;">Foundation Model</span>

This architecture is one of the variants of :class:`SignalJEPA`
that can be used for classification purposes.

.. figure:: https://braindecode.org/dev/_static/model/sjepa_post-local.jpg
   :align: center
   :alt: sJEPA Post-Local.

.. versionadded:: 0.9

.. rubric:: Pretrained Weights

Only the feature encoder weights are reused from the shared
SSL checkpoints. This model has neither a channel embedding nor a
transformer, so ``strict=False`` is required at load time to skip the
unused keys. Either hub variant works; the ``_without-chans`` one is
slightly smaller.

.. important::
   **Pre-trained Weights Available**

   .. code:: python

      from braindecode.models import SignalJEPA_PostLocal

      model = SignalJEPA_PostLocal.from_pretrained(
          "braindecode/signal-jepa_without-chans",
          n_chans=22,
          input_window_seconds=16.0,
          n_outputs=4,
          strict=False,
      )

   Requires installing ``braindecode[hub]`` for Hub integration.

.. rubric:: Usage

.. code:: python

   from braindecode.models import SignalJEPA_PostLocal

   model = SignalJEPA_PostLocal(
       n_chans=22,
       input_window_seconds=16.0,
       sfreq=128,
       n_outputs=4,  # e.g., 4-class classification
   )

   # Forward: (batch, n_chans, n_times) -> (batch, n_outputs)
   output = model(eeg_data)

.. warning::

   Pre-trained at **128 Hz** on EEG bandpass-filtered between
   **0.5 and 40 Hz** and rescaled by a factor of :math:`10^{6}`
   (volts to microvolts). Apply the same preprocessing to your
   data to match the pre-training distribution.

Parameters
----------
n_spat_filters : int
    Number of spatial filters.

References
----------
.. [1] Guetschel, P., Moreau, T., & Tangermann, M. (2024).
   S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention.
   In 9th Graz Brain-Computer Interface Conference. https://doi.org/10.3217/978-3-99161-014-4-003

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

   pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

   from braindecode.models import SignalJEPA_PostLocal

   # Train your model
   model = SignalJEPA_PostLocal(n_chans=22, n_outputs=4, n_times=1000)
   # ... training code ...

   # Push to the Hub
   model.push_to_hub(
       repo_id="username/my-signaljepa_postlocal-model",
       commit_message="Initial model upload",
   )

**Loading a model from the Hub:**

.. code:: python

   from braindecode.models import SignalJEPA_PostLocal

   # Load pretrained model
   model = SignalJEPA_PostLocal.from_pretrained("username/my-signaljepa_postlocal-model")

   # Load with a different number of outputs (head is rebuilt automatically)
   model = SignalJEPA_PostLocal.from_pretrained("username/my-signaljepa_postlocal-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

   import torch

   x = torch.randn(1, model.n_chans, model.n_times)
   # Extract encoder features (consistent dict across all models)
   out = model(x, return_features=True)
   features = out["features"]

   # Replace the classification head
   model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

   import json

   config = model.get_config()  # all __init__ params
   with open("config.json", "w") as f:
       json.dump(config, f)

   model2 = SignalJEPA_PostLocal.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>

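The docstring's warning above implies that, before fine-tuning from a checkpoint, your recordings should be bandpass-filtered to 0.5–40 Hz, resampled to 128 Hz, and rescaled from volts to microvolts. A minimal NumPy/SciPy sketch of that preprocessing, assuming data as a `(n_chans, n_times)` array in volts; the function name and the 4th-order Butterworth filter are illustrative choices, and in practice you would typically use MNE's `raw.filter` and `raw.resample` instead:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def match_pretraining(raw, orig_sfreq, target_sfreq=128, band=(0.5, 40.0)):
    """Bandpass 0.5-40 Hz, resample to 128 Hz, rescale volts -> microvolts.

    ``raw`` is a (n_chans, n_times) array in volts. The filter design here
    is an illustrative choice, not necessarily what was used in pretraining.
    """
    sos = butter(4, band, btype="bandpass", fs=orig_sfreq, output="sos")
    filtered = sosfiltfilt(sos, raw, axis=-1)
    resampled = resample_poly(filtered, target_sfreq, orig_sfreq, axis=-1)
    return resampled * 1e6  # volts -> microvolts

x = np.random.randn(22, 4 * 250) * 1e-5        # 4 s at 250 Hz, volt scale
x_prep = match_pretraining(x, orig_sfreq=250)
print(x_prep.shape)                             # -> (22, 512), i.e. 4 s at 128 Hz
```

Zero-phase filtering (`sosfiltfilt`) avoids shifting the signal in time, which matters when window onsets are event-locked.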
## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.