---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# TIDNet

Thinker Invariance DenseNet model from Kostas et al. (2020).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.TIDNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import TIDNet

model = TIDNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example values; adjust them to
match your recording.

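As a quick sanity check on those arguments, the number of time samples the model expects per window follows from the sampling rate and the window length. This is plain arithmetic (no braindecode import needed); the variable names simply mirror the constructor arguments above:

```python
# The model derives its expected input length from these two values:
sfreq = 250                 # sampling rate in Hz (example value above)
input_window_seconds = 4.0  # trial window length in seconds

n_times = int(sfreq * input_window_seconds)
print(n_times)  # 1000 samples per input window
```

So with the example values above, a batch fed to this model would have shape `(batch_size, 22, 1000)`.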
## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.TIDNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/tidnet.py#L13>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Thinker Invariance DenseNet model from Kostas et al. (2020) [TIDNet]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

.. figure:: https://content.cld.iop.org/journals/1741-2552/17/5/056008/revision3/jneabb7a7f1_hr.jpg
   :align: center
   :alt: TIDNet Architecture

See [TIDNet]_ for details.

Parameters
----------
s_growth : int
    DenseNet-style growth factor (added filters per DenseFilter).
t_filters : int
    Number of temporal filters.
drop_prob : float
    Dropout probability.
pooling : int
    Max temporal pooling (width and stride).
temp_layers : int
    Number of temporal layers.
spat_layers : int
    Number of DenseFilters.
temp_span : float
    Fraction of n_times that defines the temporal filter length:
    temp_len = ceil(temp_span * n_times).
    E.g. a temp_span of 0.05 with 1500 n_times yields a temporal
    filter of length 75.
bottleneck : int
    Bottleneck factor within DenseFilter.
summary : int
    Output size of the AdaptiveAvgPool1D layer. If set to -1, the value
    is calculated automatically (n_times // pooling).
in_chans :
    Alias for n_chans.
n_classes :
    Alias for n_outputs.
input_window_samples :
    Alias for n_times.
activation : nn.Module, default=nn.LeakyReLU
    Activation function class to apply. Should be a PyTorch activation
    module class like ``nn.ReLU`` or ``nn.ELU``. Default is ``nn.LeakyReLU``.

Notes
-----
Code adapted from: https://github.com/SPOClab-ca/ThinkerInvariance/

References
----------
.. [TIDNet] Kostas, D. & Rudzicz, F.
   Thinker invariance: enabling deep neural networks for BCI across more
   people.
   J. Neural Eng. 17, 056008 (2020).
   doi: 10.1088/1741-2552/abb7a7.

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import TIDNet

    # Train your model
    model = TIDNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-tidnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import TIDNet

    # Load pretrained model
    model = TIDNet.from_pretrained("username/my-tidnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = TIDNet.from_pretrained("username/my-tidnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = TIDNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
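
The `temp_span` arithmetic described in the docstring above can be checked directly. This is a minimal sketch reproducing the documented example with plain Python, not braindecode code:

```python
import math

# temp_len = ceil(temp_span * n_times), per the temp_span parameter
temp_span = 0.05
n_times = 1500

temp_len = math.ceil(temp_span * n_times)
print(temp_len)  # 75, matching the docstring example
```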

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```


## License

BSD-3-Clause for the model code (matching braindecode). If you
fine-tune from a published checkpoint, the resulting weights inherit
the license of that checkpoint and its training corpus.