---
tags:
  - fMRI
  - foundation_model
  - neuroscience
---
SLIM-Brain: A Data- and Training-Efficient Foundation Model for fMRI Data Analysis
This repository contains the official implementation of SLIM-Brain. SLIM-Brain is a two-stage, selective-compute pipeline for voxel-level fMRI representation learning. A lightweight global branch ranks informative temporal windows; a high-capacity 4D Hiera-JEPA encoder processes only those windows, focusing compute on brain voxels and drastically reducing memory consumption.
Installation
Setting up the environment requires Python 3.13 and a CUDA-compatible PyTorch build for GPU acceleration:
conda create -n hiera-jepa python=3.13.5
conda activate hiera-jepa
# Install dependencies
pip install -r requirements.txt
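After installation, a quick sanity check can confirm the interpreter version and that the key packages resolved. This helper script is a convenience sketch, not part of the repository; `have` is a hypothetical helper name.

```python
import importlib.util
import sys

def have(pkg: str) -> bool:
    """Return True if `pkg` is importable in the current environment."""
    return importlib.util.find_spec(pkg) is not None

# Report the interpreter version and availability of the core dependencies.
print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
for pkg in ("torch", "numpy", "yaml"):
    print(f"{pkg}: {'found' if have(pkg) else 'MISSING'}")
```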
Project Structure
The codebase is organized into modular components for easy navigation and extension:
hiera-jepa/
├── configs/          # YAML configuration files for training and model parameters
├── checkpoints/      # Saved model weights and training checkpoints
├── hiera/            # Hierarchical Vision Transformer backbone implementation
├── scripts/          # Bash scripts (entry points such as finetune.sh)
├── finetune.py       # Downstream task training and feature extraction script
└── requirements.txt  # Python package dependencies
Downstream evaluation
- Ensure your pre-training data is structured as follows:
data_root/
├── ABIDE_train/
├── ABIDE_val/
├── HCP_val/
└── HCP_train/
    ├── 0010001/                           # Subject ID
    └── 0010002/
        ├── 0010002_run-1_0000-0199_1.npz  # Data chunk 1
        └── 0010002_run-1_0000-0199_2.npz  # Data chunk 2
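The chunk filenames above encode the subject, run, frame range, and chunk index. A small sketch of parsing that convention (the pattern is inferred from the example names; `parse_chunk_name` is a hypothetical helper, not part of the repo):

```python
import re

# Naming convention, per the tree above: <subject>_run-<r>_<start>-<end>_<idx>.npz
CHUNK_RE = re.compile(
    r"(?P<subject>\d+)_run-(?P<run>\d+)_(?P<start>\d{4})-(?P<end>\d{4})_(?P<chunk>\d+)\.npz"
)

def parse_chunk_name(name: str) -> dict:
    """Split a chunk filename into subject ID, run, frame range, and chunk index."""
    m = CHUNK_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unexpected chunk filename: {name!r}")
    d = m.groupdict()
    # Keep the subject ID as a string (leading zeros matter); cast the rest to int.
    return {k: (v if k == "subject" else int(v)) for k, v in d.items()}

print(parse_chunk_name("0010002_run-1_0000-0199_1.npz"))
```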
- Configure downstream dataset loading with the following structure:
task:
  csv: "/path/to/data_csv"
data:
  data_root: /path/to/data_root
  datasets: ["HCP"]
  mode: "directory"
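A common failure mode is a config missing one of these keys. Below is a hypothetical validator sketch (not the repo's actual loader); the parsed YAML is represented here as a plain dict:

```python
# Required keys per section, matching the config fragment above.
REQUIRED = {"task": ["csv"], "data": ["data_root", "datasets", "mode"]}

def validate_config(cfg: dict) -> list:
    """Return missing keys as 'section.key' strings (empty list if valid)."""
    missing = []
    for section, keys in REQUIRED.items():
        block = cfg.get(section, {})
        missing += [f"{section}.{k}" for k in keys if k not in block]
    return missing

cfg = {
    "task": {"csv": "/path/to/data_csv"},
    "data": {"data_root": "/path/to/data_root", "datasets": ["HCP"], "mode": "directory"},
}
print(validate_config(cfg))  # -> []
```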
- Start downstream training:
# running downstream training
sh scripts/finetune.sh
Model Checkpoints
Our pre-trained model weights are available in the checkpoints directory: ./checkpoints/best_model.pth
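A minimal sketch of loading such a checkpoint with PyTorch. The `state_dict` key layout and the use of a dummy `nn.Linear` are assumptions for illustration; the real checkpoint's contents may differ, and `model` would be your instantiated SLIM-Brain encoder.

```python
import os
import tempfile

import torch

# Round-trip a checkpoint the way ./checkpoints/best_model.pth would be loaded.
# A dummy module stands in for the encoder so the sketch is self-contained.
dummy = torch.nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), "best_model.pth")
torch.save({"state_dict": dummy.state_dict()}, path)

ckpt = torch.load(path, map_location="cpu")
# Some checkpoints store the weights directly; others nest them under "state_dict".
state_dict = ckpt.get("state_dict", ckpt)

model = torch.nn.Linear(4, 2)  # replace with the actual SLIM-Brain encoder
model.load_state_dict(state_dict)
print(sorted(state_dict.keys()))  # -> ['bias', 'weight']
```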