---
license: mit
---

# Calcium-Bridged Temporal EEG Decoder

This project explores the idea of decoding EEG brain signals by modeling perception as a sequential process. Instead of treating the brain's response as a single event, this system breaks it down into distinct temporal windows, attempting to model the "chain of thought" as a visual concept crystallizes in the mind.

The project consists of two main components:

1. **A trainer (`pkas_cal_trainer_gemini.py`)** that builds a novel neural network model using the **Alljoined1 dataset**.
2. **A viewer (`pkas_cal_viewer_gemini2.py`)** that loads a trained model and provides an interactive visualization of its "thought process" on new EEG samples.

## Core Concept: The "Vibecoded" System

The central idea of this project is a system inspired by neuromorphic computing and constraint satisfaction, which we've nicknamed the "vibecoded" system.

Here's how it works, in brief:

1. **Thinking in Moments:** The brain's response to an image (e.g., from 0 to 600 ms) is not analyzed all at once. It is sliced into four distinct "thinking moments," or time windows, based on known ERP components.
2. **A Solver for Each Moment:** Each time window is processed by a dedicated `CalciumAttentionModule`. This module's job is to look at the EEG clues in its slice and find the best explanation that satisfies all the "constraints" in the signal.
3. **The Calcium Bridge:** This is the key. The "hunch" or "focus" (the `Calcium` state) from one thinking moment is passed to the next. This creates a causal chain of thought, allowing the model to refine its prediction over time from a general gist to a more specific concept.
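
The sketch below illustrates this window-by-window chaining pattern in PyTorch. It is a simplified stand-in, not the actual `CalciumAttentionModule` from `pkas_cal_trainer_gemini.py`: the window boundaries, channel count, sampling rate, class count, hidden size, and the GRU-style update used for the calcium state are all assumptions chosen for readability.

```python
# Illustrative sketch of the "calcium bridge" idea. All names, dimensions, and
# window boundaries are assumptions, not the real implementation in
# pkas_cal_trainer_gemini.py.
import torch
import torch.nn as nn

WINDOWS_MS = [(0, 150), (150, 300), (300, 450), (450, 600)]  # four "thinking moments"

class ToyCalciumModule(nn.Module):
    """Reads the EEG clues in one time window and updates the running 'calcium' focus state."""
    def __init__(self, n_channels: int, n_samples: int, hidden: int):
        super().__init__()
        self.encode = nn.Linear(n_channels * n_samples, hidden)
        self.update = nn.GRUCell(hidden, hidden)  # stand-in for the constraint-satisfying update

    def forward(self, window: torch.Tensor, calcium: torch.Tensor) -> torch.Tensor:
        clues = torch.relu(self.encode(window.flatten(1)))
        return self.update(clues, calcium)  # the bridge: carry the "hunch" into the next moment

class ToyTemporalDecoder(nn.Module):
    def __init__(self, n_channels: int = 64, sfreq: int = 512, hidden: int = 128, n_classes: int = 80):
        super().__init__()
        self.hidden = hidden
        # Convert millisecond windows to sample indices once, so slice and layer sizes agree.
        self.idx = [(int(a * sfreq / 1000), int(b * sfreq / 1000)) for a, b in WINDOWS_MS]
        self.stages = nn.ModuleList(ToyCalciumModule(n_channels, j - i, hidden) for i, j in self.idx)
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, channels, samples) covering 0-600 ms after image onset
        calcium = eeg.new_zeros(eeg.size(0), self.hidden)
        for (i, j), stage in zip(self.idx, self.stages):
            calcium = stage(eeg[:, :, i:j], calcium)  # each moment refines the previous hunch
        return self.readout(calcium)  # final prediction from the last calcium state

# Quick shape check with random data: batch of 2, 64 channels, 600 ms at 512 Hz (307 samples).
logits = ToyTemporalDecoder()(torch.randn(2, 64, 307))
print(logits.shape)  # torch.Size([2, 80])
```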

## Requirements

- Python 3.x
- PyTorch
- `datasets` (from Hugging Face)
- `tkinter` (usually included with Python)
- `matplotlib`
- `pillow`

You can install the main dependencies with pip:

```
pip install torch datasets matplotlib pillow
```

## Setup and Usage

### 1. Download Data and Model

**Data:**

- **COCO Images:** Download the 2017 training/validation images from the [COCO Dataset official site](https://cocodataset.org/#download). You will need `train2017.zip` and/or `val2017.zip`. Unzip them into a known directory.
- **COCO Annotations:** On the same site, download the "2017 Train/Val annotations". You only need the `instances_train2017.json` file.
- **Alljoined1 EEG Data:** This will be downloaded automatically by the scripts on their first run.
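
For reference, the automatic download uses the Hugging Face `datasets` library listed in the requirements. A manual fetch might look like the sketch below; the repository ID is a placeholder guess, so check `pkas_cal_trainer_gemini.py` for the identifier the scripts actually pass to `load_dataset`.

```python
# Hypothetical manual download of the Alljoined1 EEG data.
# "Alljoined/05_125" is a placeholder repository ID, not confirmed by this README;
# use whatever identifier pkas_cal_trainer_gemini.py actually requests.
from datasets import load_dataset

eeg_ds = load_dataset("Alljoined/05_125", split="train")  # cached under ~/.cache/huggingface
print(eeg_ds)             # inspect the available columns (EEG epochs, COCO image ids, ...)
print(eeg_ds[0].keys())
```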

**Pre-trained Model (Recommended):**

- You can download the pre-trained V2 model directly from its [Hugging Face repository](https://huggingface.co/Aluode/CalciumBridgeEEGConstraintViewer/tree/main). Click on `calcium_bridge_eeg_model_v2.pth` and then click the "download" button.
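
If you prefer to fetch the checkpoint from a script rather than the web UI, `huggingface_hub` (pulled in as a dependency of `datasets`) can download the same file. Whether the `.pth` file holds a bare `state_dict` or a larger checkpoint dictionary is not documented here, so this is only a sketch; inspect the loaded object before use.

```python
# Programmatic alternative to clicking "download" in the browser.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="Aluode/CalciumBridgeEEGConstraintViewer",
    filename="calcium_bridge_eeg_model_v2.pth",
)
checkpoint = torch.load(ckpt_path, map_location="cpu")  # may be a state_dict or a checkpoint dict
print(type(checkpoint))
```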

### 2. Viewing the Results (Using the Pre-trained Model)

1. Run the V2 viewer script:
   ```
   python pkas_cal_viewer_gemini2.py
   ```
2. In the GUI:
   - Select the COCO image and annotation paths you downloaded.
   - Click **"Load V2 Model"** and select the `calcium_bridge_eeg_model_v2.pth` file you downloaded from Hugging Face.
3. Once the model is loaded, click **"Test Random Sample"** to see the model's analysis of a new brain signal.

### 3. Training Your Own Model (Optional)

1. Run the V2 training script:
   ```
   python pkas_cal_trainer_gemini.py
   ```
2. In the GUI, select your COCO image and annotation paths.
3. Click **"Train Extended Model (V2)"**.
4. A new file named `calcium_bridge_eeg_model_v2.pth` will be saved with the best-performing model from your training run. You can then load this file into the viewer.

## A Note on Interpretation

This is an exploratory research tool. The model's predictions should **not** be interpreted as literal "mind-reading."

Instead, the results reflect the complex **statistical associations** learned from the multi-subject `Alljoined` dataset. When the model associates a "horse trailer" with "horse," it is because this is a strong, common conceptual link found in the aggregate brain data. The viewer is a window into the "cognitive gestalt" of an "average mind" as represented by the dataset.