---
license: cc-by-nc-nd-4.0
---

# BrainLM model

This repository hosts the pretrained Brain Language Model (BrainLM), which aims to achieve a general understanding of brain dynamics through self-supervised masked prediction. The model is introduced in the BrainLM paper (see the citation below), and its code is available in the accompanying code repository.

## Model Details

### Model Description

We introduce the Brain Language Model (BrainLM), a foundation model for brain activity dynamics trained on 6,700 hours of fMRI recordings. Utilizing self-supervised masked-prediction training, BrainLM demonstrates proficiency in both fine-tuning and zero-shot inference tasks. Fine-tuning allows for the prediction of clinical variables and future brain states. In zero-shot inference, the model identifies functional networks and generates interpretable latent representations of neural activity. Furthermore, we introduce a novel prompting technique, allowing BrainLM to function as an in silico simulator of brain activity responses to perturbations. BrainLM offers a novel framework for the analysis and understanding of large-scale brain activity data, serving as a “lens” through which new data can be more effectively interpreted.

- Developed by: van Dijk Lab at Yale University
- Model type: ViTMAE
- License: CC BY-NC-ND 4.0 (preprint license)

## Uses

BrainLM is a versatile foundation model for fMRI analysis. It can be used for:

- Decoding cognitive variables and mental health biomarkers from brain activity patterns
- Predicting future brain states by learning spatiotemporal fMRI dynamics
- Discovering intrinsic functional networks in the brain without supervision
- Perturbation analysis to simulate the effect of interventions on brain activity

### Out-of-Scope Use

Currently, this model has been trained and tested only on fMRI data. There are no guarantees regarding its performance on different modalities of brain recordings.

## Bias, Risks, and Limitations

- The model was trained only on healthy adults, so it may not generalize to other populations.
- fMRI data have limited spatiotemporal resolution, and BOLD signals are an indirect measure of neural activity.
- The model has so far been evaluated only on reconstruction and simple regression/classification tasks.
- Attention weights provide one method of interpretation but have known limitations.

### Recommendations

- Downstream applications of the model should undergo careful testing and validation before clinical deployment.
- Like any AI system, model predictions should be carefully reviewed by domain experts before informing decision-making.

## How to Get Started with the Model

Use the code below to get started with the model.
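
A minimal sketch of one way to fetch the pretrained checkpoint is shown below. It assumes the weights are hosted in this model repository under the repo ID `vandijklab/brainlm` (an assumption; replace it with the actual ID) and that the model classes used to load them come from the BrainLM code repository.

```python
# Minimal sketch: download the pretrained BrainLM checkpoint files from the
# Hugging Face Hub. The repo ID is an assumption; adjust it to this model
# repository's actual ID. Instantiating the model itself is done with the
# classes provided in the BrainLM code repository.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(repo_id="vandijklab/brainlm")  # assumed repo ID
print(f"Pretrained checkpoint downloaded to: {checkpoint_dir}")
```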

## Training Details

### Data

Data stats:

- UK Biobank (UKB): 76,296 recordings (~6,450 hours)
- Human Connectome Project (HCP): 1,002 recordings (~250 hours)

Preprocessing Steps:

- Motion Correction
- Normalization
- Temporal Filtering
- ICA Denoising

Feature Extraction:

- Brain Parcellation: the AAL-424 atlas is used to divide the brain into 424 regions (see the extraction sketch after this list).
- Temporal Resolution: ~1 Hz (TR = 0.735 s for UKB, 0.72 s for HCP).
- Dimensionality: 424-dimensional time series per scan.
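
As a rough illustration of this feature-extraction step, the sketch below pulls parcel-averaged time series from a preprocessed fMRI volume with nilearn. The atlas and image file paths are placeholders (the AAL-424 atlas is not bundled with nilearn), so this is an assumption-laden sketch rather than the exact BrainLM pipeline.

```python
# Hypothetical sketch of parcel time-series extraction with nilearn.
# Paths are placeholders: the AAL-424 atlas image must be supplied separately,
# and "preprocessed_bold.nii.gz" stands in for a motion-corrected, filtered,
# ICA-denoised fMRI run.
from nilearn.maskers import NiftiLabelsMasker

masker = NiftiLabelsMasker(
    labels_img="AAL424_atlas.nii.gz",  # placeholder path to the AAL-424 atlas
    standardize=False,                 # scaling is handled separately (see Data Scaling)
)

# Resulting array has shape (timepoints, 424): one column per parcel.
parcel_timeseries = masker.fit_transform("preprocessed_bold.nii.gz")
print(parcel_timeseries.shape)
```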

Data Scaling:

- Robust scaling was applied: for each parcel, the median was subtracted and the result divided by the interquartile range, both computed across subjects (as sketched below).
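
A minimal NumPy sketch of this robust scaling, assuming the parcellated recordings have been stacked into an array of shape `(subjects, timepoints, parcels)`; the released code may implement the details differently.

```python
import numpy as np

def robust_scale(data: np.ndarray) -> np.ndarray:
    """Subtract the per-parcel median and divide by the per-parcel
    interquartile range, with statistics pooled across subjects
    (and timepoints) for each parcel.

    data: array of shape (subjects, timepoints, parcels).
    """
    pooled = data.reshape(-1, data.shape[-1])           # (subjects * timepoints, parcels)
    median = np.median(pooled, axis=0)                  # (parcels,)
    q75, q25 = np.percentile(pooled, [75, 25], axis=0)
    iqr = q75 - q25                                     # (parcels,)
    return (data - median) / iqr

# Example with random data standing in for 10 subjects, 200 timepoints, 424 parcels.
scaled = robust_scale(np.random.randn(10, 200, 424))
```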

Data split (see the split sketch after this list):

- Training data: 80% of the UKB dataset
- Validation data: 10% of the UKB dataset
- Test data: 10% of the UKB dataset, plus the HCP dataset
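
A small sketch of this 80/10/10 recording-level split (NumPy only; the seed and any subject-level grouping in the released code may differ):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_recordings = 76_296                        # UKB recordings
order = rng.permutation(n_recordings)

n_train = int(0.8 * n_recordings)            # 80% training
n_val = int(0.1 * n_recordings)              # 10% validation

train_idx = order[:n_train]
val_idx = order[n_train:n_train + n_val]
test_idx = order[n_train + n_val:]           # remaining ~10%; HCP recordings are added to the test set
```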

### Training Procedure

BrainLM was pretrained on fMRI recordings from the UK Biobank and HCP datasets. Recordings were parcellated, embedded, masked, and reconstructed via a Transformer autoencoder. The model was evaluated on held-out test partitions of both datasets.

Objective: mean squared error (MSE) loss between the original and predicted parcel time series

Pretraining:

- 100 epochs
- Batch size 512
- Adam optimizer
- Masking ratios: 20%, 75%, and 90% (a minimal training-step sketch follows this list)
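
The sketch below illustrates the masked-prediction objective for a single training step under simplifying assumptions: parcel time series are split into fixed-length patches, a fraction of patches is masked, and the MSE loss is computed on the masked patches (MAE-style). The patch length, stand-in model, and loss details here are assumptions and may differ from the released training code.

```python
import torch

def masked_mse_step(model, x, mask_ratio=0.75, patch_len=20):
    """One illustrative masked-prediction step.

    x: tensor of shape (batch, timepoints, parcels); timepoints must be
    divisible by patch_len in this sketch.
    """
    b, t, p = x.shape
    n_patches = t // patch_len
    patches = x.reshape(b, n_patches, patch_len * p)        # (batch, patches, patch_dim)

    # Randomly mask a fraction of patches in each sample.
    n_masked = int(mask_ratio * n_patches)
    masked_idx = torch.rand(b, n_patches).argsort(dim=1)[:, :n_masked]
    mask = torch.zeros(b, n_patches, dtype=torch.bool)
    mask.scatter_(1, masked_idx, True)

    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked patches
    pred = model(visible)                                    # reconstruct all patches

    # MSE on the masked patches only (an assumption; see the paper for details).
    return ((pred - patches) ** 2)[mask].mean()

# Example usage with a throwaway linear layer standing in for the Transformer autoencoder.
toy_model = torch.nn.Linear(20 * 424, 20 * 424)
x = torch.randn(4, 200, 424)                 # 4 scans, 200 timepoints, 424 parcels
loss = masked_mse_step(toy_model, x)
loss.backward()
```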

Downstream training: fine-tuning for future brain-state prediction and for regression/classification of clinical variables

## Metrics

In this work, we use the following metrics to evaluate the model's performance (a minimal evaluation sketch follows this list):

- Reconstruction error (MSE between predicted and original parcel time series)
- Clinical variable regression error (e.g., age, neuroticism scores)
- Functional network classification accuracy
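
A brief sketch of how these metrics can be computed with scikit-learn on placeholder arrays (variable names are illustrative and not taken from the released evaluation code):

```python
import numpy as np
from sklearn.metrics import accuracy_score, mean_squared_error

# Placeholder arrays standing in for model outputs and ground truth.
true_signal = np.random.randn(424, 200)                  # original parcel time series
pred_signal = true_signal + 0.1 * np.random.randn(424, 200)
true_age, pred_age = np.array([60, 55, 70]), np.array([58, 57, 66])
true_net, pred_net = np.array([0, 1, 2, 1]), np.array([0, 1, 2, 2])

recon_mse = mean_squared_error(true_signal.ravel(), pred_signal.ravel())  # reconstruction error
age_mse = mean_squared_error(true_age, pred_age)         # clinical variable regression error
network_acc = accuracy_score(true_net, pred_net)         # functional network classification accuracy
print(recon_mse, age_mse, network_acc)
```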

## Citation

BibTeX:

 @article{ortega2023brainlm,
  title={BrainLM: A foundation model for brain activity recordings},
  author={Ortega Caro, Josue and Oliveira Fonseca, Antonio Henrique and Averill, Christopher and Rizvi, Syed A and Rosati, Matteo and Cross, James L and Mittal, Prateek and Zappala, Emanuele and Levine, Daniel and Dhodapkar, Rahul M and others},
  journal={bioRxiv},
  pages={2023--09},
  year={2023},
  publisher={Cold Spring Harbor Laboratory}
}