bruAristimunha committed on
Commit 3f21c41 · verified · 1 Parent(s): 457ff84

Replace with clean markdown card

Files changed (1):
  1. README.md +24 -161
README.md CHANGED
@@ -14,13 +14,12 @@ tags:
 
 # LUNA
 
- LUNA from Döner et al .
 
- > **Architecture-only repository.** This repo documents the
 > `braindecode.models.LUNA` class. **No pretrained weights are
- > distributed here** instantiate the model and train it on your own
- > data, or fine-tune from a published foundation-model checkpoint
- > separately.
 
 ## Quick start
 
@@ -39,179 +38,43 @@ model = LUNA(
 )
 ```
 
- The signal-shape arguments above are example defaults — adjust them
- to match your recording.
 
 ## Documentation
-
- - Full API reference (parameters, references, architecture figure):
- <https://braindecode.org/stable/generated/braindecode.models.LUNA.html>
- - Interactive browser with live instantiation:
 <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/luna.py#L30>
 
- ## Architecture description
-
- The block below is the rendered class docstring (parameters,
- references, architecture figure where available).
-
- <div class='bd-doc'><main>
- <p>LUNA from Döner et al [LUNA]_.</p>
- <span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span> <span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#d9534f;color:white;font-size:11px;font-weight:600;margin-right:4px;">Foundation Model</span>
-
- :bdg-dark-line:`Channel`
-
- .. figure:: https://arxiv.org/html/2510.22257v1/x1.png
- :align: center
- :alt: LUNA Architecture.
-
- LUNA is a topology-invariant EEG model that processes signals from varying
- numbers of channels using a channel-unification mechanism with learned queries.
-
- The architecture consists of:
- 1. Patch Feature Extraction (temporal CNN + FFT-based features)
- 2. Channel-Unification Module (cross-attention with learned queries)
- 3. Patch-wise Temporal Encoder (RoPE-based transformer)
- 4. Decoder Heads (classification or reconstruction)
-
- .. important::
- **Pre-trained Weights Available**
-
- This model has pre-trained weights available on the Hugging Face Hub
- at `PulpBio/LUNA <https://huggingface.co/PulpBio/LUNA>`_.
-
- Available model variants:
-
- - **LUNA_base.safetensors** - Base model (embed_dim=64, num_queries=4, depth=8)
- - **LUNA_large.safetensors** - Large model (embed_dim=96, num_queries=6, depth=10)
- - **LUNA_huge.safetensors** - Huge model (embed_dim=128, num_queries=8, depth=24)
-
- Example loading for fine-tuning:
-
- .. code:: python
- from braindecode.models import LUNA
-
- # Load pre-trained base model from Hugging Face Hub
- model = LUNA.from_pretrained(
- "PulpBio/LUNA",
- filename="LUNA_base.safetensors",
- n_outputs=2,
- n_chans=22,
- n_times=1000,
- embed_dim=64,
- num_queries=4,
- depth=8,
- )
-
- To push your own trained model to the Hub:
-
- .. code:: python
- # After training your model
- model.push_to_hub(
- repo_id="username/my-luna-model", commit_message="Upload trained LUNA model"
- )
-
- Requires installing ``braindecode[hug]`` for Hub integration.
-
- Parameters
- ----------
- patch_size : int
- Number of time samples per patch. Default: 40.
- num_queries : int
- Number of learned queries for channel unification.
- Paper uses: 4 (Base), 6 (Large), 8 (Huge). Default: 4.
- embed_dim : int
- Embedding dimension for patch features.
- Paper uses: 64 (Base), 96 (Large), 128 (Huge). Default: 64.
- depth : int
- Number of transformer encoder blocks.
- Paper uses: 8 (Base), 10 (Large), 24 (Huge). Default: 8.
- num_heads : int
- Number of attention heads in channel unification.
- Default: 2.
- mlp_ratio : float
- Ratio of MLP hidden dimension to embedding dimension. Default: 4.0.
- norm_layer : nn.Module
- Normalization layer class. Default: nn.LayerNorm.
- drop_path : float
- Stochastic depth rate. Default: 0.0.
-
- References
- ----------
- .. [LUNA] Döner, B., Ingolfsson, T. M., Benini, L., & Li, Y. (2025).
- LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis.
- The Thirty-Ninth Annual Conference on Neural Information Processing Systems - NeurIPS.
- Retrieved from https://openreview.net/forum?id=uazfjnFL0G
-
- .. rubric:: Hugging Face Hub integration
-
- When the optional ``huggingface_hub`` package is installed, all models
- automatically gain the ability to be pushed to and loaded from the
- Hugging Face Hub. Install with::
-
- pip install braindecode[hub]
-
- **Pushing a model to the Hub:**
-
- .. code::
- from braindecode.models import LUNA
-
- # Train your model
- model = LUNA(n_chans=22, n_outputs=4, n_times=1000)
- # ... training code ...
-
- # Push to the Hub
- model.push_to_hub(
- repo_id="username/my-luna-model",
- commit_message="Initial model upload",
- )
-
- **Loading a model from the Hub:**
-
- .. code::
- from braindecode.models import LUNA
-
- # Load pretrained model
- model = LUNA.from_pretrained("username/my-luna-model")
-
- # Load with a different number of outputs (head is rebuilt automatically)
- model = LUNA.from_pretrained("username/my-luna-model", n_outputs=4)
-
- **Extracting features and replacing the head:**
-
- .. code::
- import torch
-
- x = torch.randn(1, model.n_chans, model.n_times)
- # Extract encoder features (consistent dict across all models)
- out = model(x, return_features=True)
- features = out["features"]
-
- # Replace the classification head
- model.reset_head(n_outputs=10)
-
- **Saving and restoring full configuration:**
-
- .. code::
- import json
-
- config = model.get_config() # all __init__ params
- with open("config.json", "w") as f:
- json.dump(config, f)
-
- model2 = LUNA.from_config(config) # reconstruct (no weights)
-
- All model parameters (both EEG-specific and model-specific such as
- dropout rates, activation functions, number of filters) are automatically
- saved to the Hub and restored when loading.
-
- See :ref:`load-pretrained-models` for a complete tutorial.</main>
- </div>
 
 ## Citation
 
- Please cite both the original paper for this architecture (see the
- *References* section above) and braindecode:
 
 ```bibtex
 @article{aristimunha2025braindecode,
 
 
 # LUNA
 
+ LUNA from Döner et al. [1].
 
+ > **Architecture-only repository.** Documents the
 > `braindecode.models.LUNA` class. **No pretrained weights are
+ > distributed here.** Instantiate the model and train it on your own
+ > data.
 
 ## Quick start
 
 )
 ```
 
+ The signal-shape arguments above are illustrative defaults — adjust to
+ match your recording.
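
A quick way to pick those shape arguments is to derive `n_times` from your recording's sampling rate (a plain-Python sketch; the 250 Hz rate and 4 s window below are hypothetical placeholders, not values from the card):

```python
sfreq = 250        # Hz; hypothetical sampling rate of your recording
window_s = 4.0     # seconds of signal per training window
n_times = int(sfreq * window_s)  # value to pass as n_times
print(n_times)  # 1000, the value used in the quick-start snippet
```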
 
 ## Documentation
+ - Full API reference: <https://braindecode.org/stable/generated/braindecode.models.LUNA.html>
+ - Interactive browser (live instantiation, parameter counts):
 <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/luna.py#L30>
 
+ ## Architecture
 
+ ![LUNA architecture](https://arxiv.org/html/2510.22257v1/x1.png)
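
The mechanism that makes LUNA topology-agnostic, a fixed set of learned queries cross-attending over however many channel features arrive, can be caricatured in plain Python (illustration only, heavily simplified; not the library's implementation):

```python
import math
import random

def channel_unify(channel_feats, queries):
    """Cross-attention pooling: each learned query softmax-attends over the
    C channel feature vectors, so the output always has len(queries) tokens
    regardless of how many channels the recording had."""
    out = []
    for q in queries:
        scores = [sum(qi * ci for qi, ci in zip(q, c)) for c in channel_feats]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]   # stable softmax
        total = sum(weights)
        weights = [w / total for w in weights]
        out.append([sum(w * c[d] for w, c in zip(weights, channel_feats))
                    for d in range(len(q))])
    return out

random.seed(0)
dim, n_queries = 8, 4
queries = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_queries)]
for n_chans in (19, 22, 64):  # three different montage sizes
    feats = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_chans)]
    unified = channel_unify(feats, queries)
    print(n_chans, "channels ->", len(unified), "unified tokens")
```

The point of the sketch: the output size depends only on `num_queries`, never on the channel count, which is why one checkpoint can serve montages of different sizes.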
 
 
+ ## Parameters
 
+ | Parameter | Type | Description |
+ |---|---|---|
+ | `patch_size` | int | Number of time samples per patch. Default: 40. |
+ | `num_queries` | int | Number of learned queries for channel unification. Paper uses: 4 (Base), 6 (Large), 8 (Huge). Default: 4. |
+ | `embed_dim` | int | Embedding dimension for patch features. Paper uses: 64 (Base), 96 (Large), 128 (Huge). Default: 64. |
+ | `depth` | int | Number of transformer encoder blocks. Paper uses: 8 (Base), 10 (Large), 24 (Huge). Default: 8. |
+ | `num_heads` | int | Number of attention heads in channel unification. Default: 2. |
+ | `mlp_ratio` | float | Ratio of MLP hidden dimension to embedding dimension. Default: 4.0. |
+ | `norm_layer` | nn.Module | Normalization layer class. Default: nn.LayerNorm. |
+ | `drop_path` | float | Stochastic depth rate. Default: 0.0. |
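
As a sanity check on these defaults, the arithmetic implied by `patch_size` can be sketched in plain Python (back-of-envelope only, assuming non-overlapping patches; not the library code):

```python
def n_patches(n_times: int, patch_size: int = 40) -> int:
    """Non-overlapping temporal patches per channel."""
    if n_times % patch_size:
        raise ValueError("n_times should be a multiple of patch_size")
    return n_times // patch_size

# With the quick-start shape (1000 samples) and the default patch_size=40:
print(n_patches(1000))  # 25 patches per channel
```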
 
+ ## References
 
+ 1. Döner, B., Ingolfsson, T. M., Benini, L., & Li, Y. (2025). LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis. The Thirty-Ninth Annual Conference on Neural Information Processing Systems - NeurIPS. Retrieved from https://openreview.net/forum?id=uazfjnFL0G
 
 ## Citation
 
+ Cite the original architecture paper (see *References* above) and braindecode:
 
 ```bibtex
 @article{aristimunha2025braindecode,