Datasets:

Column schema (name: dtype, observed range):

- bibtex_url: null
- proceedings: string (length 58–58)
- bibtext: string (length 511–974)
- abstract: string (length 92–2k)
- title: string (length 30–207)
- authors: sequence (length 1–22)
- id: string (1 class)
- arxiv_id: string (length 0–10)
- GitHub: sequence (length 1–1)
- paper_page: string (14 classes)
- n_linked_authors: int64 (-1 to 1)
- upvotes: int64 (-1 to 1)
- num_comments: int64 (-1 to 0)
- n_authors: int64 (-1 to 10)
- Models: sequence (length 0–4)
- Datasets: sequence (length 0–1)
- Spaces: sequence (length 0–0)
- old_Models: sequence (length 0–4)
- old_Datasets: sequence (length 0–1)
- old_Spaces: sequence (length 0–0)
- paper_page_exists_pre_conf: int64 (0 to 1)
- type: string (2 classes)
- unique_id: int64 (0 to 855)
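The records below follow this schema. As a minimal sketch of how one might load and filter them — assuming the rows are published as a Hugging Face dataset; the repository id below is a placeholder, not the actual path:

```python
# Minimal sketch: load the rows with the Hugging Face `datasets` library and keep
# only papers that have both an arXiv id and a non-empty GitHub link.
from datasets import load_dataset

ds = load_dataset("your-username/miccai-2024-papers", split="train")  # placeholder repo id

def has_arxiv_and_code(row):
    github_links = [url for url in row["GitHub"] if url]  # some rows contain [""]
    return bool(row["arxiv_id"]) and bool(github_links)

with_code = ds.filter(has_arxiv_and_code)
print(f"{len(with_code)} of {len(ds)} rows have an arXiv id and a GitHub link")
print(with_code[0]["title"], with_code[0]["arxiv_id"])
```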
null
https://papers.miccai.org/miccai-2024/paper/0250_paper.pdf
@InProceedings{ Lin_Learning_MICCAI2024, author = { Lin, Yiqun and Wang, Hualiang and Chen, Jixiang and Li, Xiaomeng }, title = { { Learning 3D Gaussians for Extremely Sparse-View Cone-Beam CT Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Cone-Beam Computed Tomography (CBCT) is an indispensable technique in medical imaging, yet the associated radiation exposure raises concerns in clinical practice. To mitigate these risks, sparse-view reconstruction has emerged as an essential research direction, aiming to reduce the radiation dose by utilizing fewer projections for CT reconstruction. Although implicit neural representations have been introduced for sparse-view CBCT reconstruction, existing methods primarily focus on local 2D features queried from sparse projections, which is insufficient to process the more complicated anatomical structures, such as the chest. To this end, we propose a novel reconstruction framework, namely DIF-Gaussian, which leverages 3D Gaussians to represent the feature distribution in the 3D space, offering additional 3D spatial information to facilitate the estimation of attenuation coefficients. Furthermore, we incorporate test-time optimization during inference to further improve the generalization capability of the model. We evaluate DIF-Gaussian on two public datasets, showing significantly superior reconstruction performance than previous state-of-the-art methods.
Learning 3D Gaussians for Extremely Sparse-View Cone-Beam CT Reconstruction
[ "Lin, Yiqun", "Wang, Hualiang", "Chen, Jixiang", "Li, Xiaomeng" ]
Conference
2407.01090
[ "https://github.com/xmed-lab/DIF-Gaussian" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
300
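For readability, here is the record above (unique_id 300) rewritten as a Python dict keyed by the schema columns. The `bibtext` and `abstract` fields are elided for brevity, and the empty `paper_page` value is an assumption inferred from the column order:

```python
# The first record (unique_id 300) mapped onto the schema columns.
# `bibtext` and `abstract` are elided; `paper_page` is assumed empty for this row.
record = {
    "bibtex_url": None,
    "proceedings": "https://papers.miccai.org/miccai-2024/paper/0250_paper.pdf",
    "title": "Learning 3D Gaussians for Extremely Sparse-View Cone-Beam CT Reconstruction",
    "authors": ["Lin, Yiqun", "Wang, Hualiang", "Chen, Jixiang", "Li, Xiaomeng"],
    "id": "Conference",
    "arxiv_id": "2407.01090",
    "GitHub": ["https://github.com/xmed-lab/DIF-Gaussian"],
    "paper_page": "",
    "n_linked_authors": -1,
    "upvotes": -1,
    "num_comments": -1,
    "n_authors": -1,
    "Models": [],
    "Datasets": [],
    "Spaces": [],
    "old_Models": [],
    "old_Datasets": [],
    "old_Spaces": [],
    "paper_page_exists_pre_conf": 0,
    "type": "Poster",
    "unique_id": 300,
}
```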
null
https://papers.miccai.org/miccai-2024/paper/1221_paper.pdf
@InProceedings{ Pan_PGMLIF_MICCAI2024, author = { Pan, Xipeng and An, Yajun and Lan, Rushi and Liu, Zhenbing and Liu, Zaiyi and Lu, Cheng and Yang, Huihua }, title = { { PG-MLIF: Multimodal Low-rank Interaction Fusion Framework Integrating Pathological Images and Genomic Data for Cancer Prognosis Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Precise prognostication can assist physicians in developing personalized treatment and follow-up plans, which helps enhance overall survival rates. Recently, a large amount of research has relied on unimodal data for survival prediction, not fully capitalizing on the complementary information available. To address this deficiency, we propose a Multimodal Low-rank Interaction Fusion Framework Integrating Pathological images and Genomic data (PG-MLIF) for survival prediction. In this framework, we leverage the gating-based modality attention mechanism (MAM) for effective filtering at the feature level and propose the optimal weight concatenation (OWC) strategy to maximize the integration of information from pathological images, genomic data, and fused features at the model level. The model introduces a parallel decomposition strategy called low-rank multimodal fusion (LMF) for the first time, which simplifies the complexity and facilitates model contribution-based fusion, addressing the challenge of incomplete and inefficient multimodal fusion. Extensive experiments on the public GBMLGG and KIRC datasets demonstrate that our PG-MLIF outperforms state-of-the-art survival prediction methods. Additionally, we stratify patients based on the hazard ratios obtained from training on the two datasets, and the visualization results are generally consistent with the true grade classification. The code is available at: https://github.com/panxipeng/PG-MLIF.
PG-MLIF: Multimodal Low-rank Interaction Fusion Framework Integrating Pathological Images and Genomic Data for Cancer Prognosis Prediction
[ "Pan, Xipeng", "An, Yajun", "Lan, Rushi", "Liu, Zhenbing", "Liu, Zaiyi", "Lu, Cheng", "Yang, Huihua" ]
Conference
[ "https://github.com/panxipeng/PG-MLIF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
301
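Since the `bibtext` field follows a consistent `@InProceedings{ key, ... title = { { ... } }, ... }` layout, simple regular expressions are enough to recover the citation key and title. A minimal sketch, using an abbreviated copy of the PG-MLIF record's `bibtext` (the patterns are assumptions based only on the layout seen in these records):

```python
import re

# Minimal sketch: recover the citation key and title from a `bibtext` field.
# The string below is an abbreviated copy of the PG-MLIF record's entry.
bibtext = (
    "@InProceedings{ Pan_PGMLIF_MICCAI2024, "
    "author = { Pan, Xipeng and An, Yajun and Lan, Rushi }, "
    "title = { { PG-MLIF: Multimodal Low-rank Interaction Fusion Framework } }, "
    "year = {2024}, }"
)

key = re.search(r"@InProceedings\{\s*([^,\s]+)", bibtext).group(1)
title = re.search(r"title\s*=\s*\{\s*\{(.+?)\}\s*\}", bibtext).group(1).strip()

print(key)    # Pan_PGMLIF_MICCAI2024
print(title)  # PG-MLIF: Multimodal Low-rank Interaction Fusion Framework
```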
null
https://papers.miccai.org/miccai-2024/paper/0942_paper.pdf
@InProceedings{ He_Pair_MICCAI2024, author = { He, Jianjun and Cai, Chenyu and Li, Qiong and Ma, Andy J }, title = { { Pair Shuffle Consistency for Semi-supervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Semi-supervised medical image segmentation is a practical but challenging problem, in which only limited pixel-wise annotations are available for training. While most existing methods train a segmentation model by using the labeled and unlabeled data separately, the learning paradigm solely based on unlabeled data is less reliable due to the possible incorrectness of pseudo labels. In this paper, we propose a novel method, namely pair shuffle consistency (PSC) learning, for semi-supervised medical image segmentation. The pair shuffle operation splits an image pair into patches, and then randomly shuffles them to obtain mixed images. With the shuffled images for training, local information is better interpreted for pixel-wise predictions. The consistency learning of labeled-unlabeled image pairs becomes more reliable, since predictions of the unlabeled data can be learned from those of the labeled data with ground truth. To enhance the model robustness, the consistency constraint on unlabeled-unlabeled image pairs serves as a regularization term, thereby further improving the segmentation performance. Experiments on three benchmarks demonstrate that our method outperforms the state of the art for semi-supervised medical image segmentation.
Pair Shuffle Consistency for Semi-supervised Medical Image Segmentation
[ "He, Jianjun", "Cai, Chenyu", "Li, Qiong", "Ma, Andy J" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
302
null
https://papers.miccai.org/miccai-2024/paper/1260_paper.pdf
@InProceedings{ Xu_MiHATPA_MICCAI2024, author = { Xu, Zhufeng and Qin, Jiaxin and Li, Chenhao and Bu, Dechao and Zhao, Yi }, title = { { MiHATP:A Multi-Hybrid Attention Super-Resolution Network for Pathological Image Based on Transformation Pool Contrastive Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Digital pathology slides can serve medical practitioners or aid in computer-assisted diagnosis and treatment. Collection personnel typically employ hyperspectral microscopes to scan pathology slides into Whole Slide Images (WSI) with pixel counts reaching the million level. However, this process incurs significant acquisition time and data storage costs. Utilizing super-resolution imaging techniques to enhance low-resolution pathological images enables downstream analysis of pathological tissue slice data under low-resource and cost-effective medical conditions. Nevertheless, existing super-resolution methods cannot integrate attention information containing variable receptive fields and effective means to handle distortions and artifacts in the output data. This leads to differences between super-resolution images and authentic images depicting cell contours and tissue morphology. We propose a method named MiHATP: A Multi(Mi)-Hybrid(H) Attention(A) Network Based on Transformation(T) Pool(P) Contrastive Learning to address these challenges. By constructing contrastive losses through reversible image transformation and irreversible low-quality image transformation, MiHATP effectively reduces distortion in super-resolution pathological images. Additionally, within MiHATP, we design a Multi-Hybrid Attention structure to ensure strong modeling capability for long-distance and short-distance information, thereby ensuring that the super-resolution network can obtain richer image information. Experimental results demonstrate superior performance compared to existing methods. Furthermore, we conduct tests on the output images of the super-resolution network for downstream cell segmentation and phenotypes tasks, achieving performance similar to that of original high-resolution images.
MiHATP: A Multi-Hybrid Attention Super-Resolution Network for Pathological Image Based on Transformation Pool Contrastive Learning
[ "Xu, Zhufeng", "Qin, Jiaxin", "Li, Chenhao", "Bu, Dechao", "Zhao, Yi" ]
Conference
[ "https://github.com/rabberk/MiHATP.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
303
null
https://papers.miccai.org/miccai-2024/paper/2292_paper.pdf
@InProceedings{ Dwe_Estimating_MICCAI2024, author = { Dwedari, Mohammed Munzer and Consagra, William and Müller, Philip and Turgut, Özgün and Rueckert, Daniel and Rathi, Yogesh }, title = { { Estimating Neural Orientation Distribution Fields on High Resolution Diffusion MRI Scans } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
The Orientation Distribution Function (ODF) characterizes key brain microstructural properties and plays an important role in understanding brain structural connectivity. Recent works introduced Implicit Neural Representation (INR) based approaches to form a spatially aware continuous estimate of the ODF field and demonstrated promising results in key tasks of interest when compared to conventional discrete approaches. However, traditional INR methods face difficulties when scaling to large-scale images, such as modern ultra-high-resolution MRI scans, posing challenges in learning fine structures as well as inefficiencies in training and inference speed. In this work, we propose HashEnc, a grid-hash-encoding-based estimation of the ODF field and demonstrate its effectiveness in retaining structural and textural features. We show that HashEnc achieves a 10% enhancement in image quality while requiring 3x less computational resources than current methods.
Estimating Neural Orientation Distribution Fields on High Resolution Diffusion MRI Scans
[ "Dwedari, Mohammed Munzer", "Consagra, William", "Müller, Philip", "Turgut, Özgün", "Rueckert, Daniel", "Rathi, Yogesh" ]
Conference
2409.09387
[ "https://github.com/MunzerDw/NODF-HashEnc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
304
null
https://papers.miccai.org/miccai-2024/paper/1398_paper.pdf
@InProceedings{ Lei_Weaksupervised_MICCAI2024, author = { Lei, Haijun and Tong, Guanjiie and Su, Huaqiang and Lei, Baiying }, title = { { Weak-supervised Attention Fusion Network for Carotid Artery Vessel Wall Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The automatic and accurate segmentation of the carotid artery vessel wall can assist doctors in clinical diagnosis. Medical images often have complex and blurry features, which makes manual data annotation very difficult and time-consuming. 3D CNN can utilize three-dimensional spatial information to more accurately identify diseased tissues and organ structures, but its segmentation performance is limited due to the lack of global contextual information correlation. This paper proposes a network based on CNN and Transformer to segment the carotid artery vessel wall. By combining the effectiveness of CNN in dealing with 3D image segmentation problems and the global attention mechanism of Transformer, it is possible to better capture and process the features of this information. By designing Joint Attention Structure Block (JAS), semantic information in skip connections can be enhanced. The feature fusion block (FF) is used to associate input information with each layer of feature maps, enhancing the detailed information of the feature maps. The effectiveness of this method has been verified through a large number of comparative experiments.
Weak-supervised Attention Fusion Network for Carotid Artery Vessel Wall Segmentation
[ "Lei, Haijun", "Tong, Guanjiie", "Su, Huaqiang", "Lei, Baiying" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
305
null
https://papers.miccai.org/miccai-2024/paper/0411_paper.pdf
@InProceedings{ Yan_Generating_MICCAI2024, author = { Yang, Jiancheng and Sedykh, Ekaterina and Adhinarta, Jason Ken and Le, Hieu and Fua, Pascal }, title = { { Generating Anatomically Accurate Heart Structures via Neural Implicit Fields } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Implicit functions have significantly advanced shape modeling in diverse fields. Yet, their application within medical imaging often overlooks the intricate interrelations among various anatomical structures, a consideration crucial for accurately modeling complex multi-part structures like the heart. This study presents ImHeart, a latent variable model specifically designed to model complex heart structures. Leveraging the power of learnable templates, ImHeart adeptly captures the nuanced relationships between multiple heart components using a unified deformation field and introduces an implicit registration technique to manage the pose variability in medical data. Built on the WHS3D dataset of 140 refined whole-heart structures, ImHeart delivers superior reconstruction accuracy and anatomical fidelity. Moreover, we demonstrate that ImHeart can significantly improve heart segmentation from multi-center MRI scans through a retraining pipeline, adeptly navigating the domain gaps inherent to such data.
Generating Anatomically Accurate Heart Structures via Neural Implicit Fields
[ "Yang, Jiancheng", "Sedykh, Ekaterina", "Adhinarta, Jason Ken", "Le, Hieu", "Fua, Pascal" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
306
null
https://papers.miccai.org/miccai-2024/paper/4128_paper.pdf
@InProceedings{ Li_ASPS_MICCAI2024, author = { Li, Huiqian and Zhang, Dingwen and Yao, Jieru and Han, Longfei and Li, Zhongyu and Han, Junwei }, title = { { ASPS: Augmented Segment Anything Model for Polyp Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Polyp segmentation plays a pivotal role in colorectal cancer diagnosis. Recently, the emergence of the Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation, leveraging its powerful pre-training capability on large-scale datasets. However, due to the domain gap between natural and endoscopy images, SAM encounters two limitations in achieving effective performance in polyp segmentation. Firstly, its Transformer-based structure prioritizes global and low-frequency information, potentially overlooking local details, and introducing bias into the learned features. Secondly, when applied to endoscopy images, its poor out-of-distribution (OOD) performance results in substandard predictions and biased confidence output. To tackle these challenges, we introduce a novel approach named Augmented SAM for Polyp Segmentation (ASPS), equipped with two modules: Cross-branch Feature Augmentation (CFA) and Uncertainty-guided Prediction Regularization (UPR). CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge while enhancing local features and high-frequency details. Moreover, UPR ingeniously leverages SAM’s IoU score to mitigate uncertainty during the training procedure, thereby improving OOD performance and domain generalization. Extensive experimental results demonstrate the effectiveness and utility of the proposed method in improving SAM’s performance in polyp segmentation. Our code is available at https://github.com/HuiqianLi/ASPS.
ASPS: Augmented Segment Anything Model for Polyp Segmentation
[ "Li, Huiqian", "Zhang, Dingwen", "Yao, Jieru", "Han, Longfei", "Li, Zhongyu", "Han, Junwei" ]
Conference
2407.00718
[ "https://github.com/HuiqianLi/ASPS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
307
null
https://papers.miccai.org/miccai-2024/paper/1759_paper.pdf
@InProceedings{ Liu_Learning_MICCAI2024, author = { Liu, Hong and Wei, Dong and Lu, Donghuan and Sun, Jinghan and Zheng, Hao and Zheng, Yefeng and Wang, Liansheng }, title = { { Learning to Segment Multiple Organs from Multimodal Partially Labeled Datasets } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Learning to segment multiple organs from partially labeled medical image datasets can significantly reduce the burden of manual annotation. However, due to the large domain gap, learning from partially labeled datasets of different modalities has not been well addressed in the literature. In addition, the anatomic prior knowledge of various organs is spread in multiple datasets and needs to be more effectively utilized. This work proposes a novel framework for learning to segment multiple organs from multimodal partially labeled datasets (i.e., CT and MRI). Specifically, our framework constructs a cross-modal a priori atlas from training data, which implicitly contains prior knowledge of organ locations, shapes, and sizes. Based on the atlas, three novel modules are proposed to utilize the prior knowledge to address the joint challenges of unlabeled organs and inter-modal domain gaps: 1) to better utilize unlabeled organs for training, we propose an atlas-guided pseudo-label refiner network (APRN) to improve the quality of pseudo-labels; 2) we propose an atlas-conditioned modality alignment network (AMAN) for cross-modal alignment in the label space via adversarial training, forcing cross-modal segmentations of organs labeled in a different modality to match the atlas; and 3) to further align organ-specific semantics in the latent space, we introduce modal-invariant class prototype anchoring modules (MICPAMs) supervised by the atlas-guided refined pseudo-labels, encouraging domain-invariant features for each organ. Extensive experiments on both multimodal and monomodal partially labeled datasets demonstrate the superior performance of our framework to existing state-of-the-art methods and the efficacy of its components.
Learning to Segment Multiple Organs from Multimodal Partially Labeled Datasets
[ "Liu, Hong", "Wei, Dong", "Lu, Donghuan", "Sun, Jinghan", "Zheng, Hao", "Zheng, Yefeng", "Wang, Liansheng" ]
Conference
[ "https://github.com/ccarliu/multimodal-PL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
308
null
https://papers.miccai.org/miccai-2024/paper/1953_paper.pdf
@InProceedings{ Yan_ANew_MICCAI2024, author = { Yan, Yunlu and Zhu, Lei and Li, Yuexiang and Xu, Xinxing and Goh, Rick Siow Mong and Liu, Yong and Khan, Salman and Feng, Chun-Mei }, title = { { A New Perspective to Boost Performance Fairness For Medical Federated Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Improving the fairness of federated learning (FL) benefits healthy and sustainable collaboration, especially for medical applications. However, existing fair FL methods ignore the specific characteristics of medical FL applications, i.e., domain shift among the datasets from different hospitals. In this work, we propose Fed-LWR to improve performance fairness from the perspective of feature shift, a key issue influencing the performance of medical FL systems caused by domain shift. Specifically, we dynamically perceive the bias of the global model across all hospitals by estimating the layer-wise difference in feature representations between local and global models. To minimize global divergence, we assign higher weights to hospitals with larger differences. The estimated client weights help us to re-aggregate the local models per layer to obtain a fairer global model. We evaluate our method on two widely used federated medical image segmentation benchmarks. The results demonstrate that our method achieves better and fairer performance compared with several state-of-the-art fair FL methods.
A New Perspective to Boost Performance Fairness For Medical Federated Learning
[ "Yan, Yunlu", "Zhu, Lei", "Li, Yuexiang", "Xu, Xinxing", "Goh, Rick Siow Mong", "Liu, Yong", "Khan, Salman", "Feng, Chun-Mei" ]
Conference
[ "https://github.com/IAMJackYan/Fed-LWR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
309
null
https://papers.miccai.org/miccai-2024/paper/0603_paper.pdf
@InProceedings{ Fis_SubgroupSpecific_MICCAI2024, author = { Fischer, Paul and Willms, Hannah and Schneider, Moritz and Thorwarth, Daniela and Muehlebach, Michael and Baumgartner, Christian F. }, title = { { Subgroup-Specific Risk-Controlled Dose Estimation in Radiotherapy } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Cancer remains a leading cause of death, highlighting the importance of effective radiotherapy (RT). Magnetic resonance-guided linear accelerators (MR-Linacs) enable imaging during RT, allowing for inter-fraction, and perhaps even intra-fraction, adjustments of treatment plans. However, achieving this requires fast and accurate dose calculations. While Monte Carlo simulations offer accuracy, they are computationally intensive. Deep learning frameworks show promise, yet lack uncertainty quantification crucial for high-risk applications like RT. Risk-controlling prediction sets (RCPS) offer model-agnostic uncertainty quantification with mathematical guarantees. However, we show that naive application of RCPS may lead to only certain subgroups such as the image background being risk-controlled. In this work, we extend RCPS to provide prediction intervals with coverage guarantees for multiple subgroups with unknown subgroup membership at test time. We evaluate our algorithm on real clinical planning volumes from five different anatomical regions and show that our novel subgroup RCPS (SG-RCPS) algorithm leads to prediction intervals that jointly control the risk for multiple subgroups. In particular, our method controls the risk of the crucial voxels along the radiation beam significantly better than conventional RCPS.
Subgroup-Specific Risk-Controlled Dose Estimation in Radiotherapy
[ "Fischer, Paul", "Willms, Hannah", "Schneider, Moritz", "Thorwarth, Daniela", "Muehlebach, Michael", "Baumgartner, Christian F." ]
Conference
2407.08432
[ "https://github.com/paulkogni/SG-RCPS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
310
null
https://papers.miccai.org/miccai-2024/paper/3422_paper.pdf
@InProceedings{ Deu_Neural_MICCAI2024, author = { Deutges, Michael and Sadafi, Ario and Navab, Nassir and Marr, Carsten }, title = { { Neural Cellular Automata for Lightweight, Robust and Explainable Classification of White Blood Cell Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Diagnosis of hematological malignancies depends on accurate identification of white blood cells in peripheral blood smears. Deep learning techniques are emerging as a viable solution to scale and optimize this process by automatic cell classification. However, these techniques face several challenges such as limited generalizability, sensitivity to domain shifts, and lack of explainability. Here, we introduce a novel approach for white blood cell classification based on neural cellular automata (NCA). We test our approach on three datasets of white blood cell images and show that we achieve competitive performance compared to conventional methods. Our NCA-based method is significantly smaller in terms of parameters and exhibits robustness to domain shifts. Furthermore, the architecture is inherently explainable, providing insights into the decision process for each classification, which helps to understand and validate model predictions. Our results demonstrate that NCA can be used for image classification, and that they address key challenges of conventional methods, indicating a high potential for applicability in clinical practice.
Neural Cellular Automata for Lightweight, Robust and Explainable Classification of White Blood Cell Images
[ "Deutges, Michael", "Sadafi, Ario", "Navab, Nassir", "Marr, Carsten" ]
Conference
2404.05584
[ "https://github.com/marrlab/WBC-NCA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
311
null
https://papers.miccai.org/miccai-2024/paper/0979_paper.pdf
@InProceedings{ Zha_M2Fusion_MICCAI2024, author = { Zhang, Song and Du, Siyao and Sun, Caixia and Li, Bao and Shao, Lizhi and Zhang, Lina and Wang, Kun and Liu, Zhenyu and Tian, Jie }, title = { { M2Fusion: Multi-time Multimodal Fusion for Prediction of Pathological Complete Response in Breast Cancer } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Accurate identification of patients who achieve pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) is critical before surgery for guiding customized treatment regimens and assessing prognosis in breast cancer. However, current methods for predicting pCR primarily rely on single modality data or single time-point images, which fail to capture tumor changes and comprehensively represent tumor heterogeneity at both macro and micro levels. Additionally, complementary information between modalities is not fully interacted. In this paper, we present M2Fusion, pioneering the fusion of multi-time multimodal data for treatment response prediction, with two key components: the multi-time magnetic resonance imagings (MRIs) contrastive learning loss that learns representations reflecting NAC-induced tumor changes; the orthogonal multimodal fusion module that integrates orthogonal information from MRIs and whole slide images (WSIs). To evaluate the proposed M2Fusion, we collect pre-treatment MRI, post-treatment MRI, and WSIs of biopsy from patients with breast cancer at two different collaborating hospitals, each with the pCR assessed by the standard pathological procedure. Experimental results quantitatively reveal that the proposed M2Fusion improves treatment response prediction and outperforms other multimodal fusion methods and single-modality approaches. Validation on external test sets further demonstrates the generalization and validity of the model. Our code is available at https://github.com/SongZHS/M2Fusion.
M2Fusion: Multi-time Multimodal Fusion for Prediction of Pathological Complete Response in Breast Cancer
[ "Zhang, Song", "Du, Siyao", "Sun, Caixia", "Li, Bao", "Shao, Lizhi", "Zhang, Lina", "Wang, Kun", "Liu, Zhenyu", "Tian, Jie" ]
Conference
[ "https://github.com/SongZHS/M2Fusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
312
null
https://papers.miccai.org/miccai-2024/paper/2801_paper.pdf
@InProceedings{ Zer_AMONuSeg_MICCAI2024, author = { Zerouaoui, Hasnae and Oderinde, Gbenga Peter and Lefdali, Rida and Echihabi, Karima and Akpulu, Stephen Peter and Agbon, Nosereme Abel and Musa, Abraham Sunday and Yeganeh, Yousef and Farshad, Azade and Navab, Nassir }, title = { { AMONuSeg: A Histological Dataset for African Multi-Organ Nuclei Semantic Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Nuclei semantic segmentation is a key component for advancing machine learning and deep learning applications in digital pathology. However, most existing segmentation models are trained and tested on high-quality data acquired with expensive equipment, such as whole slide scanners, which are not accessible to most pathologists in developing countries. These pathologists rely on low-resource data acquired with low-precision microscopes, smartphones, or digital cameras, which have different characteristics and challenges than high-resource data. Therefore, there is a gap between the state-of-the-art segmentation models and the real-world needs of low-resource settings. This work aims to bridge this gap by presenting the first fully annotated African multi-organ dataset for histopathology nuclei semantic segmentation acquired with a low-precision microscope. We also evaluate state-of-the-art segmentation models, including spectral feature extraction encoder and vision transformer-based models, and stain normalization techniques for color normalization of Hematoxylin and Eosin-stained histopathology slides. Our results provide important insights for future research on nuclei histopathology segmentation with low-resource data.
AMONuSeg: A Histological Dataset for African Multi-Organ Nuclei Semantic Segmentation
[ "Zerouaoui, Hasnae", "Oderinde, Gbenga Peter", "Lefdali, Rida", "Echihabi, Karima", "Akpulu, Stephen Peter", "Agbon, Nosereme Abel", "Musa, Abraham Sunday", "Yeganeh, Yousef", "Farshad, Azade", "Navab, Nassir" ]
Conference
[ "https://github.com/zerouaoui/AMONUSEG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
313
null
https://papers.miccai.org/miccai-2024/paper/2599_paper.pdf
@InProceedings{ Guo_Common_MICCAI2024, author = { Guo, Yunpeng and Zeng, Xinyi and Zeng, Pinxian and Fei, Yuchen and Wen, Lu and Zhou, Jiliu and Wang, Yan }, title = { { Common Vision-Language Attention for Text-Guided Medical Image Segmentation of Pneumonia } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Pneumonia, recognized as a severe respiratory disease, has attracted widespread attention in the wake of the COVID-19 pandemic, underscoring the critical need for precise diagnosis and effective treatment. Despite significant advancements in the automatic segmentation of lung infection areas using medical imaging, most current approaches rely solely on a large quantity of high-quality images for training, which is not practical in clinical settings. Moreover, the unimodal attention mechanisms adopted in conventional vision-language models encounter challenges in effectively preserving and integrating information across modalities. To alleviate these problems, we introduce the Text-Guided Common Attention Model (TGCAM), a novel method for text-guided medical image segmentation of pneumonia. Text-Guided means inputting both an image and its corresponding text into the model simultaneously to obtain segmentation results. Specifically, TGCAM encompasses the introduction of Common Attention, a multimodal interaction paradigm between vision and language, applied during the decoding phase. In addition, we present an Iterative Text Enhancement Module that facilitates the progressive refinement of text, thereby augmenting multi-modal interactions. Experiments on public CT and X-ray datasets demonstrate that our method outperforms state-of-the-art methods both qualitatively and quantitatively.
Common Vision-Language Attention for Text-Guided Medical Image Segmentation of Pneumonia
[ "Guo, Yunpeng", "Zeng, Xinyi", "Zeng, Pinxian", "Fei, Yuchen", "Wen, Lu", "Zhou, Jiliu", "Wang, Yan" ]
Conference
[ "https://github.com/G-peppa/TGCAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
314
null
https://papers.miccai.org/miccai-2024/paper/0796_paper.pdf
@InProceedings{ Guo_HistGen_MICCAI2024, author = { Guo, Zhengrui and Ma, Jiabo and Xu, Yingxue and Wang, Yihui and Wang, Liansheng and Chen, Hao }, title = { { HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Histopathology serves as the gold standard in cancer diagnosis, with clinical reports being vital in interpreting and understanding this process, guiding cancer treatment and patient care. The automation of histopathology report generation with deep learning stands to significantly enhance clinical efficiency and lessen the labor-intensive, time-consuming burden on pathologists in report writing. In pursuit of this advancement, we introduce HistGen, a multiple instance learning-empowered framework for histopathology report generation together with the first benchmark dataset for evaluation. Inspired by diagnostic and report-writing workflows, HistGen features two delicately designed modules, aiming to boost report generation by aligning whole slide images (WSIs) and diagnostic reports at both local and global granularities. To achieve this, a local-global hierarchical encoder is developed for efficient visual feature aggregation from a region-to-slide perspective. Meanwhile, a cross-modal context module is proposed to explicitly facilitate alignment and interaction between distinct modalities, effectively bridging the gap between the extensive visual sequences of WSIs and corresponding highly summarized reports. Experimental results on WSI report generation show the proposed model outperforms state-of-the-art (SOTA) models by a large margin. Moreover, the results of fine-tuning our model on cancer subtyping and survival analysis tasks further demonstrate superior performance compared to SOTA methods, showcasing strong transfer learning capability. Dataset and code are available here.
HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction
[ "Guo, Zhengrui", "Ma, Jiabo", "Xu, Yingxue", "Wang, Yihui", "Wang, Liansheng", "Chen, Hao" ]
Conference
2403.05396
[ "https://github.com/dddavid4real/HistGen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
315
null
https://papers.miccai.org/miccai-2024/paper/2830_paper.pdf
@InProceedings{ Zha_Incorporating_MICCAI2024, author = { Zhang, Tiantian and Lin, Manxi and Guo, Hongda and Zhang, Xiaofan and Chiu, Ka Fung Peter and Feragen, Aasa and Dou, Qi }, title = { { Incorporating Clinical Guidelines through Adapting Multi-modal Large Language Model for Prostate Cancer PI-RADS Scoring } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
The Prostate Imaging Reporting and Data System (PI-RADS) is pivotal in the diagnosis of clinically significant prostate cancer through MRI imaging. Current deep learning-based PI-RADS scoring methods often lack the incorporation of common PI-RADS clinical guideline (PICG) utilized by radiologists, potentially compromising scoring accuracy. This paper introduces a novel approach that adapts a multi-modal large language model (MLLM) to incorporate PICG into PI-RADS scoring model without additional annotations and network parameters. We present a designed two-stage fine-tuning process aiming at adapting a MLLM originally trained on natural images to the MRI images while effectively integrating the PICG. Specifically, in the first stage, we develop a domain adapter layer tailored for processing 3D MRI inputs and instruct the MLLM to differentiate MRI sequences. In the second stage, we translate PICG for guiding instructions from the model to generate PICG-guided image features. Through such a feature distillation step, we align the scoring network’s features with the PICG-guided image features, which enables the model to effectively incorporate the PICG information. We develop our model on a public dataset and evaluate it on an in-house dataset. Experimental results demonstrate that our approach effectively improves the performance of current scoring networks. Code is available at: https://github.com/med-air/PICG2scoring
Incorporating Clinical Guidelines through Adapting Multi-modal Large Language Model for Prostate Cancer PI-RADS Scoring
[ "Zhang, Tiantian", "Lin, Manxi", "Guo, Hongda", "Zhang, Xiaofan", "Chiu, Ka Fung Peter", "Feragen, Aasa", "Dou, Qi" ]
Conference
2405.08786
[ "https://github.com/med-air/PICG2scoring" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
316
null
https://papers.miccai.org/miccai-2024/paper/0872_paper.pdf
@InProceedings{ Lim_Diffusionbased_MICCAI2024, author = { Liman, Michelle Espranita and Rueckert, Daniel and Fintelmann, Florian J. and Müller, Philip }, title = { { Diffusion-based Generative Image Outpainting for Recovery of FOV-Truncated CT Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Field-of-view (FOV) recovery of truncated chest CT scans is crucial for accurate body composition analysis, which involves quantifying skeletal muscle and subcutaneous adipose tissue (SAT) on CT slices. This, in turn, enables disease prognostication. Here, we present a method for recovering truncated CT slices using generative image outpainting. We train a diffusion model and apply it to truncated CT slices generated by simulating a small FOV. Our model reliably recovers the truncated anatomy and outperforms the previous state-of-the-art despite being trained on 87% less data. Our code is available at https://github.com/michelleespranita/ct_palette.
Diffusion-based Generative Image Outpainting for Recovery of FOV-Truncated CT Images
[ "Liman, Michelle Espranita", "Rueckert, Daniel", "Fintelmann, Florian J.", "Müller, Philip" ]
Conference
2406.04769
[ "https://github.com/michelleespranita/ct_palette" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
317
null
https://papers.miccai.org/miccai-2024/paper/0621_paper.pdf
@InProceedings{ Don_Multistage_MICCAI2024, author = { Dong, Haichuan and Zhou, Runjie and Yun, Boxiang and Zhou, Huihui and Zhang, Benyan and Li, Qingli and Wang, Yan }, title = { { Multi-stage Multi-granularity Focus-tuned Learning Paradigm for Medical HSI Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Despite the significant breakthroughs that Medical Hyperspectral Imaging (MHSI) has brought to computational pathology, the asymmetric information in spectral and spatial dimensions poses a primary challenge. In this study, we propose a multi-stage multi-granularity Focus-tuned Learning paradigm for Medical HSI Segmentation. To learn subtle spectral differences while equalizing the spatiospectral feature learning, we design quadruplet-learning pre-training and focus-tuned fine-tuning stages for capturing both disease-level and image-level subtle spectral differences while integrating spatially and spectrally dominant features. We propose an intensifying and weakening strategy throughout all stages. Our method significantly outperforms all competitors in MHSI segmentation, with over 3.5% improvement in DSC. An ablation study further shows that our method learns compact spatiospectral features while capturing various levels of spectral differences. Code will be released at https://github.com/DHC233/FL.
Multi-stage Multi-granularity Focus-tuned Learning Paradigm for Medical HSI Segmentation
[ "Dong, Haichuan", "Zhou, Runjie", "Yun, Boxiang", "Zhou, Huihui", "Zhang, Benyan", "Li, Qingli", "Wang, Yan" ]
Conference
[ "https://github.com/DHC233/FL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
318
null
https://papers.miccai.org/miccai-2024/paper/0415_paper.pdf
@InProceedings{ Pło_Swin_MICCAI2024, author = { Płotka, Szymon and Chrabaszcz, Maciej and Biecek, Przemyslaw }, title = { { Swin SMT: Global Sequential Modeling for Enhancing 3D Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Recent advances in Vision Transformers (ViTs) have significantly enhanced medical image segmentation by facilitating the learning of global relationships. However, these methods face a notable challenge in capturing diverse local and global long-range sequential feature representations, particularly evident in whole-body CT (WBCT) scans. To overcome this limitation, we introduce Swin Soft Mixture Transformer (Swin SMT), a novel architecture based on Swin UNETR. This model incorporates a Soft Mixture-of-Experts (Soft MoE) to effectively handle complex and diverse long-range dependencies. The use of Soft MoE allows for scaling up model parameters while maintaining a balance between computational complexity and segmentation performance in both training and inference modes. We evaluate Swin SMT on the publicly available TotalSegmentator-V2 dataset, which includes 117 major anatomical structures in WBCT images. Comprehensive experimental results demonstrate that Swin SMT outperforms several state-of-the-art methods in 3D anatomical structure segmentation, achieving an average Dice Similarity Coefficient of 85.09%. The code and pre-trained weights of Swin SMT are publicly available at https://github.com/MI2DataLab/SwinSMT.
Swin SMT: Global Sequential Modeling for Enhancing 3D Medical Image Segmentation
[ "Płotka, Szymon", "Chrabaszcz, Maciej", "Biecek, Przemyslaw" ]
Conference
[ "https://github.com/MI2DataLab/SwinSMT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
319
null
https://papers.miccai.org/miccai-2024/paper/0006_paper.pdf
@InProceedings{ Ma_Spatiotemporal_MICCAI2024, author = { Ma, Xinghua and Zou, Mingye and Fang, Xinyan and Liu, Yang and Luo, Gongning and Wang, Wei and Wang, Kuanquan and Qiu, Zhaowen and Gao, Xin and Li, Shuo }, title = { { Spatio-temporal Contrast Network for Data-efficient Learning of Coronary Artery Disease in Coronary CT Angiography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Coronary artery disease (CAD) poses a significant challenge to cardiovascular patients worldwide, underscoring the crucial role of automated CAD diagnostic technology in clinical settings. Previous methods for diagnosing CAD using coronary artery CT angiography (CCTA) images have certain limitations in widespread replication and clinical application due to the high demand for annotated medical imaging data. In this work, we introduce the Spatio-temporal Contrast Network (SC-Net) for the first time, designed to tackle the challenges of data-efficient learning in CAD diagnosis based on CCTA. SC-Net utilizes data augmentation to facilitate clinical feature learning and leverages spatio-temporal prediction-contrast based on dual tasks to maximize the effectiveness of limited data, thus providing clinically reliable predictive results. Experimental findings from a dataset comprising 218 CCTA images from diverse patients demonstrate that SC-Net achieves outstanding performance in automated CAD diagnosis with a reduced number of training samples. The introduction of SC-Net presents a practical data-efficient learning strategy, thereby facilitating the implementation and application of automated CAD diagnosis across a broader spectrum of clinical scenarios. The source code is publicly available at the following link (https://github.com/PerceptionComputingLab/SC-Net).
Spatio-temporal Contrast Network for Data-efficient Learning of Coronary Artery Disease in Coronary CT Angiography
[ "Ma, Xinghua", "Zou, Mingye", "Fang, Xinyan", "Liu, Yang", "Luo, Gongning", "Wang, Wei", "Wang, Kuanquan", "Qiu, Zhaowen", "Gao, Xin", "Li, Shuo" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
320
null
https://papers.miccai.org/miccai-2024/paper/2701_paper.pdf
@InProceedings{ Jia_IarCAC_MICCAI2024, author = { Jiang, Weili and Li, Yiming and Yi, Zhang and Wang, Jianyong and Chen, Mao }, title = { { IarCAC: Instance-aware Representation for Coronary Artery Calcification Segmentation in Cardiac CT angiography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Coronary Artery Calcification (CAC) is a robust indicator of coronary artery disease and a critical determinant of percutaneous coronary intervention outcomes. Our method is inspired by a clinical observation that CAC typically manifests as a sparse distribution of multiple instances. Existing methods focusing solely on spatial correlation overlook the sparse spatial distribution of semantic connections in CAC tasks. Motivated by this, we introduce a novel instance-aware representation method for CAC segmentation, termed IarCAC, which explicitly leverages the sparse connectivity pattern among instances to enhance the model’s instance discrimination capability. The proposed IarCAC first develops an InstanceViT module, which assesses the connection strength between each pair of tokens, enabling the model to learn instance-specific attention patterns. Subsequently, an instance-aware guided module is introduced to learn sparse high-resolution representations over instance-dependent regions in the Fourier domain. To evaluate the effectiveness of the proposed method, we conducted experiments on two challenging CAC datasets and achieved state-of-the-art performance across all datasets. The code is available at https://github.com/WeiliJiang/IarCAC
IarCAC: Instance-aware Representation for Coronary Artery Calcification Segmentation in Cardiac CT angiography
[ "Jiang, Weili", "Li, Yiming", "Yi, Zhang", "Wang, Jianyong", "Chen, Mao" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
321
null
https://papers.miccai.org/miccai-2024/paper/1899_paper.pdf
@InProceedings{ Bai_NODER_MICCAI2024, author = { Bai, Hao and Hong, Yi }, title = { { NODER: Image Sequence Regression Based on Neural Ordinary Differential Equations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Regression on medical image sequences can capture temporal image pattern changes and predict images at missing or future time points. However, existing geodesic regression methods limit their regression performance by a strong underlying assumption of linear dynamics, while diffusion-based methods have high computational costs and lack constraints to preserve image topology. In this paper, we propose an optimization-based new framework called NODER, which leverages neural ordinary differential equations to capture complex underlying dynamics and reduces its high computational cost of handling high-dimensional image volumes by introducing the latent space. We compare our NODER with two recent regression methods, and the experimental results on ADNI and ACDC datasets demonstrate that our method achieves the SOTA performance in 3D image regression. Our model needs only a couple of images in a sequence for prediction, which is practical, especially for clinical situations where extremely limited image time series are available for analysis.
NODER: Image Sequence Regression Based on Neural Ordinary Differential Equations
[ "Bai, Hao", "Hong, Yi" ]
Conference
2407.13241
[ "https://github.com/ZedKing12138/NODER-pytorch" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
322
null
https://papers.miccai.org/miccai-2024/paper/2993_paper.pdf
@InProceedings{ Raj_Assessing_MICCAI2024, author = { Raj, Ankita and Swaika, Harsh and Varma, Deepankar and Arora, Chetan }, title = { { Assessing Risk of Stealing Proprietary Models for Medical Imaging Tasks } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
The success of deep learning in medical imaging applications has led several companies to deploy proprietary models in diagnostic workflows, offering monetized services. Even though model weights are hidden to protect the intellectual property of the service provider, these models are exposed to model stealing (MS) attacks, where adversaries can clone the model’s functionality by querying it with a proxy dataset and training a thief model on the acquired predictions. While extensively studied on general vision tasks, the susceptibility of medical imaging models to MS attacks remains inadequately explored. This paper investigates the vulnerability of black-box medical imaging models to MS attacks under realistic conditions where the adversary lacks knowledge of the victim model’s training data and operates with limited query budgets. We demonstrate that adversaries can effectively execute MS attacks by using publicly available datasets. To further enhance MS capabilities with limited query budgets, we propose a two-step model stealing approach termed QueryWise. This method capitalizes on unlabeled data obtained from a proxy distribution to train the thief model without incurring additional queries. Evaluation on two medical imaging models for Gallbladder Cancer and COVID-19 classification substantiate the effectiveness of the proposed attack. The source code is available at https://github.com/rajankita/QueryWise.
Assessing Risk of Stealing Proprietary Models for Medical Imaging Tasks
[ "Raj, Ankita", "Swaika, Harsh", "Varma, Deepankar", "Arora, Chetan" ]
Conference
[ "https://github.com/rajankita/QueryWise" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
323
null
https://papers.miccai.org/miccai-2024/paper/3722_paper.pdf
@InProceedings{ Got_PASSION_MICCAI2024, author = { Gottfrois, Philippe and Gröger, Fabian and Andriambololoniaina, Faly Herizo and Amruthalingam, Ludovic and Gonzalez-Jimenez, Alvaro and Hsu, Christophe and Kessy, Agnes and Lionetti, Simone and Mavura, Daudi and Ng’ambi, Wingston and Ngongonda, Dingase Faith and Pouly, Marc and Rakotoarisaona, Mendrika Fifaliana and Rapelanoro Rabenja, Fahafahantsoa and Traoré, Ibrahima and Navarini, Alexander A. }, title = { { PASSION for Dermatology: Bridging the Diversity Gap with Pigmented Skin Images from Sub-Saharan Africa } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Africa faces a huge shortage of dermatologists, with less than one per million people. This is in stark contrast to the high demand for dermatologic care, with 80% of the paediatric population suffering from largely untreated skin conditions. The integration of AI into healthcare sparks significant hope for treatment accessibility, especially through the development of AI-supported teledermatology. Current AI models are predominantly trained on white-skinned patients and do not generalize well enough to pigmented patients. The PASSION project aims to address this issue by collecting images of skin diseases in Sub-Saharan countries with the aim of open-sourcing this data. This dataset is the first of its kind, consisting of 1,653 patients for a total of 4,901 images. The images are representative of telemedicine settings and encompass the most common paediatric conditions: eczema, fungals, scabies, and impetigo. We also provide a baseline machine learning model trained on the dataset and a detailed performance analysis for the subpopulations represented in the dataset. The project website can be found at https://passionderm.github.io/.
PASSION for Dermatology: Bridging the Diversity Gap with Pigmented Skin Images from Sub-Saharan Africa
[ "Gottfrois, Philippe", "Gröger, Fabian", "Andriambololoniaina, Faly Herizo", "Amruthalingam, Ludovic", "Gonzalez-Jimenez, Alvaro", "Hsu, Christophe", "Kessy, Agnes", "Lionetti, Simone", "Mavura, Daudi", "Ng’ambi, Wingston", "Ngongonda, Dingase Faith", "Pouly, Marc", "Rakotoarisaona, Mendrika Fifaliana", "Rapelanoro Rabenja, Fahafahantsoa", "Traoré, Ibrahima", "Navarini, Alexander A." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
324
null
https://papers.miccai.org/miccai-2024/paper/0297_paper.pdf
@InProceedings{ Bas_Quest_MICCAI2024, author = { Basak, Hritam and Yin, Zhaozheng }, title = { { Quest for Clone: Test-time Domain Adaptation for Medical Image Segmentation by Searching the Closest Clone in Latent Space } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Unsupervised Domain Adaptation (UDA) aims to align labeled source distribution and unlabeled target distribution by mining domain-agnostic feature representation. However, adapting the source-trained model for new target domains after the model is deployed to users poses a significant challenge. To address this, we propose a generative latent search paradigm to reconstruct the closest clone of every target image from the source latent space. This involves utilizing a test-time adaptation (TTA) strategy, wherein a latent optimization step finds the closest clone of each target image from the source representation space using variational sampling of source latent distribution. Thus, our method facilitates domain adaptation without requiring target-domain supervision during training. Moreover, we demonstrate that our approach can be further fine-tuned using a few labeled target data without the need for unlabeled target data, by leveraging global and local label guidance from available target annotations to enhance the downstream segmentation task. We empirically validate the efficacy of our proposed method, surpassing existing UDA, TTA, and SSDA methods in two domain adaptive image segmentation tasks. Code is available at https://github.com/hritam-98/Quest4Clone.
Quest for Clone: Test-time Domain Adaptation for Medical Image Segmentation by Searching the Closest Clone in Latent Space
[ "Basak, Hritam", "Yin, Zhaozheng" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
325
null
https://papers.miccai.org/miccai-2024/paper/1644_paper.pdf
@InProceedings{ Lia_Enhancing_MICCAI2024, author = { Liang, Peixian and Zheng, Hao and Li, Hongming and Gong, Yuxin and Bakas, Spyridon and Fan, Yong }, title = { { Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Whole slide image (WSI) classification plays a crucial role in digital pathology data analysis. However, the immense size of WSIs and the absence of fine-grained sub-region labels pose significant challenges for accurate WSI classification. Typical classification-driven deep learning methods often struggle to generate informative image representations, which can compromise the robustness of WSI classification. In this study, we address this challenge by incorporating both discriminative and contrastive learning techniques for WSI classification. Different from the existing contrastive learning methods for WSI classification that primarily rely on pseudo labels assigned to patches based on the WSI-level labels, our approach takes a different route to directly focus on constructing positive and negative samples at the WSI-level. Specifically, we select a subset of representative image patches to represent WSIs and create positive and negative samples at the WSI-level, facilitating effective learning of informative image features. Experimental results on two datasets and ablation studies have demonstrated that our method significantly improved the WSI classification performance compared to state-of-the-art deep learning methods and enabled learning of informative features that promoted robustness of the WSI classification.
Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning
[ "Liang, Peixian", "Zheng, Hao", "Li, Hongming", "Gong, Yuxin", "Bakas, Spyridon", "Fan, Yong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
326
null
https://papers.miccai.org/miccai-2024/paper/1763_paper.pdf
@InProceedings{ Zho_Gait_MICCAI2024, author = { Zhou, Zirui and Liang, Junhao and Peng, Zizhao and Fan, Chao and An, Fengwei and Yu, Shiqi }, title = { { Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Scoliosis poses significant diagnostic challenges, particularly in adolescents, where early detection is crucial for effective treatment. Traditional diagnostic and follow-up methods, which rely on physical examinations and radiography, face limitations due to the need for clinical expertise and the risk of radiation exposure, thus restricting their use for widespread early screening. In response, we introduce a novel, video-based, non-invasive method for scoliosis classification using gait analysis, which circumvents these limitations. This study presents Scoliosis1K, the first large-scale dataset tailored for video-based scoliosis classification, encompassing over one thousand adolescents. Leveraging this dataset, we developed ScoNet, an initial model that encountered challenges in dealing with the complexities of real-world data. This led to the creation of ScoNet-MT, an enhanced model incorporating multi-task learning, which exhibits promising diagnostic accuracy for application purposes. Our findings demonstrate that gait can be a non-invasive biomarker for scoliosis, revolutionizing screening practices with deep learning and setting a precedent for non-invasive diagnostic methodologies. The dataset and code are publicly available at \url{https://zhouzi180.github.io/Scoliosis1K/}.
Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis
[ "Zhou, Zirui", "Liang, Junhao", "Peng, Zizhao", "Fan, Chao", "An, Fengwei", "Yu, Shiqi" ]
Conference
2407.05726
[ "https://github.com/shiqiyu/opengait" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
327
null
https://papers.miccai.org/miccai-2024/paper/0578_paper.pdf
@InProceedings{ Sun_PositionGuided_MICCAI2024, author = { Sun, Zhichao and Gu, Yuliang and Liu, Yepeng and Zhang, Zerui and Zhao, Zhou and Xu, Yongchao }, title = { { Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Anomaly detection in chest X-rays is a critical task. Most methods mainly model the distribution of normal images, and then regard significant deviation from normal distribution as anomaly. Recently, CLIP-based methods, pre-trained on a large number of medical images, have shown impressive performance on zero/few-shot downstream tasks. In this paper, we aim to explore the potential of CLIP-based methods for anomaly detection in chest X-rays. Considering the discrepancy between the CLIP pre-training data and the task-specific data, we propose a position-guided prompt learning method. Specifically, inspired by the fact that experts diagnose chest X-rays by carefully examining distinct lung regions, we propose learnable position-guided text and image prompts to adapt the task data to the frozen pre-trained CLIP-based model. To enhance the model’s discriminative capability, we propose a novel structure-preserving anomaly synthesis method within chest x-rays during the training process. Extensive experiments on three datasets demonstrate that our proposed method outperforms some state-of-the-art methods. The code of our implementation is available at https://github.com/sunzc-sunny/PPAD.
Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays
[ "Sun, Zhichao", "Gu, Yuliang", "Liu, Yepeng", "Zhang, Zerui", "Zhao, Zhou", "Xu, Yongchao" ]
Conference
2405.11976
[ "https://github.com/sunzc-sunny/PPAD" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
328
null
https://papers.miccai.org/miccai-2024/paper/0515_paper.pdf
@InProceedings{ Yan_Region_MICCAI2024, author = { Yang, Zhiwen and Chen, Haowei and Qian, Ziniu and Zhou, Yang and Zhang, Hui and Zhao, Dan and Wei, Bingzheng and Xu, Yan }, title = { { Region Attention Transformer for Medical Image Restoration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Transformer-based methods have demonstrated impressive results in medical image restoration, attributed to the multi-head self-attention (MSA) mechanism in the spatial dimension. However, the majority of existing Transformers conduct attention within fixed and coarsely partitioned regions (e.g., the entire image or fixed patches), resulting in interference from irrelevant regions and fragmentation of continuous image content. To overcome these challenges, we introduce a novel Region Attention Transformer (RAT) that utilizes a region-based multi-head self-attention mechanism (R-MSA). The R-MSA dynamically partitions the input image into non-overlapping semantic regions using the robust Segment Anything Model (SAM) and then performs self-attention within these regions. This region partitioning is more flexible and interpretable, ensuring that only pixels from similar semantic regions complement each other, thereby eliminating interference from irrelevant regions. Moreover, we introduce a focal region loss to guide our model to adaptively focus on recovering high-difficulty regions. Extensive experiments demonstrate the effectiveness of RAT in various medical image restoration tasks, including PET image synthesis, CT image denoising, and pathological image super-resolution. Code is available at \href{https://github.com/Yaziwel/Region-Attention-Transformer-for-Medical-Image-Restoration.git}{https://github.com/RAT}.
Region Attention Transformer for Medical Image Restoration
[ "Yang, Zhiwen", "Chen, Haowei", "Qian, Ziniu", "Zhou, Yang", "Zhang, Hui", "Zhao, Dan", "Wei, Bingzheng", "Xu, Yan" ]
Conference
2407.09268
[ "https://github.com/Yaziwel/Region-Attention-Transformer-for-Medical-Image-Restoration.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
329
null
https://papers.miccai.org/miccai-2024/paper/0931_paper.pdf
@InProceedings{ Li_From_MICCAI2024, author = { Li, Wuyang and Liu, Xinyu and Yang, Qiushi and Yuan, Yixuan }, title = { { From Static to Dynamic Diagnostics: Boosting Medical Image Analysis via Motion-Informed Generative Videos } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
In the field of intelligent healthcare, the accessibility of medical data is severely constrained by privacy concerns, high costs, and limited patient cases, which significantly hinder the development of diagnostic models for qualified clinical assistance. Though previous efforts have been made to synthesize medical images via generative models, they are limited to static imagery that fails to capture the dynamic motions in clinical practice, such as contractile patterns of organ walls, leading to vulnerable prediction in diagnostics. To tackle this issue, we propose a holistic paradigm, VidMotion, to boost medical image analysis with generative medical videos, representing the first exploration in this field. VidMotion consists of a Motion-guided Unbiased Enhancement (MUE) to augment static images into dynamic videos at the data level and a Motion-aware Collaborative Learning (MCL) module to learn with images and generated videos jointly at the model level. Specifically, MUE first transforms medical images into generative videos enriched with diverse clinical motions, which are guided by image-to-video generative foundation models. Then, to avoid the potential clinical bias caused by the imbalanced generative videos, we design an unbiased sampling strategy informed by the class distribution prior statistically, thereby extracting high-quality video frames. In MCL, we perform joint learning with the image and video representation, including a video-to-image distillation and image-to-image consistency, to fully capture the intrinsic motion semantics for motion-informed diagnosis. We validate our method on extensive semi-supervised learning benchmarks and justify that VidMotion is highly effective and efficient, outperforming state-of-the-art approaches significantly. The code will be released to push forward the community.
From Static to Dynamic Diagnostics: Boosting Medical Image Analysis via Motion-Informed Generative Videos
[ "Li, Wuyang", "Liu, Xinyu", "Yang, Qiushi", "Yuan, Yixuan" ]
Conference
[ "https://github.com/CUHK-AIM-Group/VidMotion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
330
null
https://papers.miccai.org/miccai-2024/paper/2368_paper.pdf
@InProceedings{ Sah_FedMRL_MICCAI2024, author = { Sahoo, Pranab and Tripathi, Ashutosh and Saha, Sriparna and Mondal, Samrat }, title = { { FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep Reinforcement Learning for Medical Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Despite recent advancements in federated learning (FL) for medical image diagnosis, addressing data heterogeneity among clients remains a significant challenge for practical implementation. A primary hurdle in FL arises from the non-IID nature of data samples across clients, which typically results in a decline in the performance of the aggregated global model. In this study, we introduce FedMRL, a novel federated multi-agent deep reinforcement learning framework designed to address data heterogeneity. FedMRL incorporates a novel loss function to facilitate fairness among clients, preventing bias in the final global model. Additionally, it employs a multi-agent reinforcement learning (MARL) approach to calculate the proximal term (μ) for the personalized local objective function, ensuring convergence to the global optimum. Furthermore, FedMRL integrates an adaptive weight adjustment method using a Self-organizing map (SOM) on the server side to counteract distribution shifts among clients’ local data distributions. We assess our approach using two publicly available real-world medical datasets, and the results demonstrate that FedMRL significantly outperforms state-of-the-art techniques, showing its efficacy in addressing data heterogeneity in federated learning.
FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep Reinforcement Learning for Medical Imaging
[ "Sahoo, Pranab", "Tripathi, Ashutosh", "Saha, Sriparna", "Mondal, Samrat" ]
Conference
2407.05800
[ "https://github.com/Pranabiitp/FedMRL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
331
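The FedMRL abstract mentions a proximal term with coefficient μ in each client's personalized local objective. The snippet below is a minimal FedProx-style sketch of such a term, assuming `global_params` is a list of the server model's parameters; in the paper, μ is selected per client by a multi-agent RL policy, which is not shown here.

```python
# Illustrative FedProx-style proximal term added to a client's local loss.
import torch

def local_loss_with_prox(model, global_params, task_loss, mu):
    """Add (mu/2) * ||w - w_global||^2 to the client's ordinary task loss (a tensor)."""
    prox = torch.zeros((), device=task_loss.device)
    for p, g in zip(model.parameters(), global_params):
        prox = prox + ((p - g.detach()) ** 2).sum()
    return task_loss + 0.5 * mu * prox
```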
null
https://papers.miccai.org/miccai-2024/paper/3759_paper.pdf
@InProceedings{ Rad_ThyGraph_MICCAI2024, author = { Radhachandran, Ashwath and Vittalam, Alekhya and Ivezic, Vedrana and Sant, Vivek and Athreya, Shreeram and Moleta, Chace and Patel, Maitraya and Masamed, Rinat and Arnold, Corey and Speier, William }, title = { { ThyGraph: A Graph-Based Approach for Thyroid Nodule Diagnosis from Ultrasound Studies } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Improved thyroid nodule risk stratification from ultrasound (US) can mitigate overdiagnosis and unnecessary biopsies. Previous studies often train deep learning models using manually selected single US frames; these approaches deviate from clinical practice where physicians utilize multiple image views for diagnosis. This paper introduces ThyGraph, a novel graph-based approach that improves feature aggregation and correlates anatomically proximate images, by leveraging spatial information to model US image studies as patient-level graphs. Graph convolutional networks are trained on image-based and patch-based graphs generated from 505 US image studies to predict nodule malignancy. Self-attention graph pooling is introduced to produce a node-level interpretability metric that is visualized downstream to identify important inputs. Our best performing model demonstrated an AUROC of 0.866±0.019 and AUPRC of 0.749±0.043 across five-fold cross validation, significantly outperforming two previously published attention-based feature aggregation networks. These previous studies fail to account for spatial dependencies by modeling images within a study as independent, uncorrelated instances. In the proposed graph paradigm, ThyGraph can effectively aggregate information across views of a nodule and take advantage of inter-image dependencies to improve nodule risk stratification, leading to better patient triaging and reducing reliance on biopsies. Code is available at https://github.com/ashwath-radha/ThyGraph.
ThyGraph: A Graph-Based Approach for Thyroid Nodule Diagnosis from Ultrasound Studies
[ "Radhachandran, Ashwath", "Vittalam, Alekhya", "Ivezic, Vedrana", "Sant, Vivek", "Athreya, Shreeram", "Moleta, Chace", "Patel, Maitraya", "Masamed, Rinat", "Arnold, Corey", "Speier, William" ]
Conference
[ "https://github.com/ashwath-radha/ThyGraph" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
332
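The ThyGraph abstract models an ultrasound study as a patient-level graph over image embeddings processed by graph convolutional networks. The sketch below is only an illustration of that pipeline shape: it builds a k-nearest-neighbour adjacency in feature space (the paper instead leverages spatial information for its graphs) and applies one normalized GCN propagation step in plain PyTorch; all names and sizes are assumptions.

```python
# Sketch: patient-level graph from per-image embeddings + one GCN propagation step.
import torch

def knn_adjacency(feats, k=4):
    """feats: (N, D) embeddings of the images in one ultrasound study."""
    dist = torch.cdist(feats, feats)                          # pairwise distances
    nn_idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match
    adj = torch.zeros(feats.size(0), feats.size(0), device=feats.device)
    adj.scatter_(1, nn_idx, 1.0)
    return ((adj + adj.t()) > 0).float()                      # symmetric adjacency

def gcn_layer(feats, adj, weight):
    """One propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
    norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm @ feats @ weight)
```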
null
https://papers.miccai.org/miccai-2024/paper/0781_paper.pdf
@InProceedings{ Xia_Generalizing_MICCAI2024, author = { Xia, Peng and Hu, Ming and Tang, Feilong and Li, Wenxue and Zheng, Wenhao and Ju, Lie and Duan, Peibo and Yao, Huaxiu and Ge, Zongyuan }, title = { { Generalizing to Unseen Domains in Diabetic Retinopathy with Disentangled Representations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Diabetic Retinopathy (DR), induced by diabetes, poses a significant risk of visual impairment. Accurate and effective grading of DR aids in the treatment of this condition. Yet existing models experience notable performance degradation on unseen domains due to domain shifts. Previous methods address this issue by simulating domain style through simple visual transformation and mitigating domain noise via learning robust representations. However, domain shifts encompass more than image styles. They overlook biases caused by implicit factors such as ethnicity, age, and diagnostic criteria. In our work, we propose a novel framework where representations of paired data from different domains are decoupled into semantic features and domain noise. The resulting augmented representation comprises original retinal semantics and domain noise from other domains, aiming to generate enhanced representations aligned with real-world clinical needs, incorporating rich information from diverse domains. Subsequently, to improve the robustness of the decoupled representations, class and domain prototypes are employed to interpolate the disentangled representations, and data-aware weights are designed to focus on rare classes and domains. Finally, we devise a robust pixel-level semantic alignment loss to align retinal semantics decoupled from features, maintaining a balance between intra-class diversity and dense class features. Experimental results on multiple benchmarks demonstrate the effectiveness of our method on unseen domains. The code implementations are accessible on https://github.com/richard-peng-xia/DECO.
Generalizing to Unseen Domains in Diabetic Retinopathy with Disentangled Representations
[ "Xia, Peng", "Hu, Ming", "Tang, Feilong", "Li, Wenxue", "Zheng, Wenhao", "Ju, Lie", "Duan, Peibo", "Yao, Huaxiu", "Ge, Zongyuan" ]
Conference
2406.06384
[ "https://github.com/richard-peng-xia/DECO" ]
https://huggingface.co/papers/2406.06384
1
1
0
9
[]
[]
[]
[]
[]
[]
1
Poster
333
null
https://papers.miccai.org/miccai-2024/paper/0588_paper.pdf
@InProceedings{ Dak_On_MICCAI2024, author = { Dakri, Abdelmouttaleb and Arora, Vaibhav and Challier, Léo and Keller, Marilyn and Black, Michael J. and Pujades, Sergi }, title = { { On predicting 3D bone locations inside the human body } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Knowing the precise location of the bones inside the human body is key in several medical tasks, such as patient placement inside an imaging device or surgical navigation inside a patient. Our goal is to predict the bone locations using only an external 3D body surface observation. Existing approaches either validate their predictions on 2D data (X-rays) or with pseudo-ground truth computed from motion capture using biomechanical models. Thus, methods either suffer from a 3D-2D projection ambiguity or directly lack validation on clinical imaging data. In this work, we start with a dataset of segmented skin and long bones obtained from 3D full body MRI images that we refine into individual bone segmentations. To learn the skin-to-bone correlations, one needs to register the paired data. Few anatomical models allow registering a skeleton and the skin simultaneously. One such method, SKEL, has a skin and a skeleton that are jointly rigged with the same pose parameters. However, it lacks the flexibility to adjust the bone locations inside its skin. To address this, we extend SKEL into SKEL-J to allow its bones to fit the segmented bones while its skin fits the segmented skin. These precise fits allow us to train SKEL-J to more accurately infer the anatomical joint locations from the skin surface. Our qualitative and quantitative results show how our bone location predictions are more accurate than all existing approaches. To foster future research, we make the individual bone segmentations, the fitted SKEL-J models, and the new inference methods available for research purposes at https://3dbones.is.tue.mpg.de.
On predicting 3D bone locations inside the human body
[ "Dakri, Abdelmouttaleb", "Arora, Vaibhav", "Challier, Léo", "Keller, Marilyn", "Black, Michael J.", "Pujades, Sergi" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
334
null
https://papers.miccai.org/miccai-2024/paper/1489_paper.pdf
@InProceedings{ Qin_DBSAM_MICCAI2024, author = { Qin, Chao and Cao, Jiale and Fu, Huazhu and Shahbaz Khan, Fahad and Anwer, Rao Muhammad }, title = { { DB-SAM: Delving into High Quality Universal Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Recently, the Segment Anything Model (SAM) has demonstrated promising segmentation capabilities in a variety of downstream segmentation tasks. However, in the context of universal medical image segmentation, there exists a notable performance discrepancy when directly applying SAM due to the domain gap between natural and 2D/3D medical data. In this work, we propose a dual-branch adapted SAM framework, named DB-SAM, that strives to effectively bridge this domain gap. Our dual-branch adapted SAM contains two branches in parallel: a ViT branch and a convolution branch. The ViT branch incorporates a learnable channel attention block after each frozen attention block, which captures domain-specific local features. On the other hand, the convolution branch employs a light-weight convolutional block to extract domain-specific shallow features from the input medical image. To perform cross-branch feature fusion, we design a bilateral cross-attention block and a ViT convolution fusion block, which dynamically combine diverse information from the two branches for the mask decoder. Extensive experiments on a large-scale medical image dataset with various 3D and 2D medical segmentation tasks reveal the merits of our proposed contributions. On 21 3D medical image segmentation tasks, our proposed DB-SAM achieves an absolute gain of 8.8\%, compared to a recent medical SAM adapter in the literature. Our code and models will be publicly released.
DB-SAM: Delving into High Quality Universal Medical Image Segmentation
[ "Qin, Chao", "Cao, Jiale", "Fu, Huazhu", "Shahbaz Khan, Fahad", "Anwer, Rao Muhammad" ]
Conference
2410.04172
[ "https://github.com/AlfredQin/DB-SAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
335
null
https://papers.miccai.org/miccai-2024/paper/4075_paper.pdf
@InProceedings{ Che_TransWindow_MICCAI2024, author = { Chen, Jiahe and Kobayashi, Etsuko and Sakuma, Ichiro and Tomii, Naoki }, title = { { Trans-Window Panoramic Impasto for Online Tissue Deformation Recovery } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Deformation recovery from laparoscopic images benefits many downstream applications like robot planning, intraoperative navigation and surgical safety assessment. We define tissue deformation as time-variant surface structure and displacement. Besides, we also pay attention to the surface strain, which bridges the visual observation and the tissue biomechanical status, for which continuous pointwise surface mapping and tracking are necessary. Previous SLAM-based methods cannot cope with instrument-induced occlusion and severe scene deformation, while the neural field-based ones are offline and scene-specific, which hinders their application in continuous mapping. Moreover, neither approach meets the requirement of continuous pointwise tracking. To overcome these limitations, we assume a deformable environment and a movable window through which an observer depicts the environment’s 3D structure on a canonical canvas as maps in a process named impasto. The observer performs panoramic impasto for the currently and previously observed 3D structure in a two-step online approach: optimization and fusion. The optimization of the maps compensates for the error in the observation of the structure and the tracking by preserving spatiotemporal smoothness, while the fusion is for merging the estimated and the newly observed maps by ensuring visibility. Experiments were conducted using ex vivo and in vivo stereo laparoscopic datasets where tool-tissue interaction occurs and large camera motion exists. Results demonstrate that the proposed online method is robust to instrument-induced occlusion, capable of estimating surface strain, and can continuously reconstruct and track surface points regardless of camera motion. Code is available at: https://github.com/bmpelab/trans_window_panoramic_impasto.git
Trans-Window Panoramic Impasto for Online Tissue Deformation Recovery
[ "Chen, Jiahe", "Kobayashi, Etsuko", "Sakuma, Ichiro", "Tomii, Naoki" ]
Conference
[ "https://github.com/bmpelab/trans_window_panoramic_impasto.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
336
null
https://papers.miccai.org/miccai-2024/paper/0843_paper.pdf
@InProceedings{ Liu_When_MICCAI2024, author = { Liu, Yifan and Li, Wuyang and Wang, Cheng and Chen, Hui and Yuan, Yixuan }, title = { { When 3D Partial Points Meets SAM: Tooth Point Cloud Segmentation with Sparse Labels } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Tooth point cloud segmentation is a fundamental task in many orthodontic applications. Current research mainly focuses on fully supervised learning, which demands expensive and tedious manual point-wise annotation. Although recent weakly-supervised alternatives have been proposed to use weak labels for 3D segmentation and achieve promising results, they tend to fail when the labels are extremely sparse. Inspired by the powerful promptable segmentation capability of the Segment Anything Model (SAM), we propose a framework named SAMTooth that leverages such capacity to complement the extremely sparse supervision. To automatically generate appropriate point prompts for SAM, we propose a novel Confidence-aware Prompt Generation strategy, where coarse category predictions are aggregated with confidence-aware filtering. Furthermore, to fully exploit the structural and shape clues in SAM’s outputs for assisting the 3D feature learning, we advance a Mask-guided Representation Learning that re-projects the generated tooth masks of SAM into 3D space and constrains these points of different teeth to possess distinguished representations. To demonstrate the effectiveness of the framework, we conduct experiments on the public dataset and surprisingly find that with only 0.1\% annotations (one point per tooth), our method can surpass recent weakly supervised methods by a large margin, and the performance is even comparable to recent fully-supervised methods, showcasing the significant potential of applying SAM to 3D perception tasks with sparse labels. Code is available at https://github.com/CUHK-AIM-Group/SAMTooth.
When 3D Partial Points Meets SAM: Tooth Point Cloud Segmentation with Sparse Labels
[ "Liu, Yifan", "Li, Wuyang", "Wang, Cheng", "Chen, Hui", "Yuan, Yixuan" ]
Conference
2409.01691
[ "https://github.com/CUHK-AIM-Group/SAMTooth" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
337
null
https://papers.miccai.org/miccai-2024/paper/0651_paper.pdf
@InProceedings{ Zhu_Semisupervised_MICCAI2024, author = { Zhu, Ruiyun and Oda, Masahiro and Hayashi, Yuichiro and Kitasaka, Takayuki and Mori, Kensaku }, title = { { Semi-supervised Tubular Structure Segmentation with Cross Geometry and Hausdorff Distance Consistency } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
This study introduces a novel semi-supervised method for 3D segmentation of tubular structures. Complete and automated segmentation of complex tubular structures in medical imaging remains a challenging task. Traditional supervised deep learning methods often demand a tremendous amount of annotated data to train the deep model, and such annotations are costly and difficult to obtain. To address this, a semi-supervised approach could be a viable solution. Segmenting complex tubular structures with limited annotated data remains a formidable challenge. Many semi-supervised techniques rely on pseudo-labeling, which involves generating labels for unlabeled images based on predictions from a model trained on labeled data. In addition, several semi-supervised learning methods have been proposed based on data-level consistency, which enforces consistent predictions by applying perturbations to input images. However, these methods tend to overlook the geometric shape characteristics of the segmentation targets. In our research, we introduce a task-level consistency learning approach that incorporates cross geometry consistency and Hausdorff distance consistency, taking advantage of the geometric shape properties of both labeled and unlabeled data. Our deep learning model generates both a segmentation map and a distance transform map. By applying the proposed consistency, we ensure that the geometric shapes in both maps align closely, thereby enhancing the accuracy and performance of tubular structure segmentation. We tested our method on airway segmentation in 3D CT scans, where it outperformed recent state-of-the-art methods, showing an 88.4% tree length detected rate, 82.8% branch detected rate, and 89.7% precision rate.
Semi-supervised Tubular Structure Segmentation with Cross Geometry and Hausdorff Distance Consistency
[ "Zhu, Ruiyun", "Oda, Masahiro", "Hayashi, Yuichiro", "Kitasaka, Takayuki", "Mori, Kensaku" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
338
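The abstract above describes a model with a segmentation head and a distance-transform head whose geometric shapes are constrained to agree. The following sketch shows one simple way such an agreement term could look, using SciPy's Euclidean distance transform of the thresholded segmentation as a target for the distance head; the paper's cross geometry and Hausdorff distance consistencies are more elaborate than this.

```python
# Sketch of a segmentation/distance-map agreement term (not the paper's exact losses).
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def dt_consistency_loss(seg_logits, dist_pred, threshold=0.5):
    """seg_logits, dist_pred: (B, 1, D, H, W) outputs of the two heads."""
    with torch.no_grad():
        seg = (torch.sigmoid(seg_logits) > threshold).cpu().numpy()
        dt = np.stack([distance_transform_edt(v[0]) for v in seg])   # EDT per volume
        dt = torch.from_numpy(dt).unsqueeze(1).float().to(dist_pred.device)
    return F.mse_loss(dist_pred, dt)
```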
null
https://papers.miccai.org/miccai-2024/paper/1964_paper.pdf
@InProceedings{ Zhu_LoCIDiffCom_MICCAI2024, author = { Zhu, Zihao and Tao, Tianli and Tao, Yitian and Deng, Haowen and Cai, Xinyi and Wu, Gaofeng and Wang, Kaidong and Tang, Haifeng and Zhu, Lixuan and Gu, Zhuoyang and Shen, Dinggang and Zhang, Han }, title = { { LoCI-DiffCom: Longitudinal Consistency-Informed Diffusion Model for 3D Infant Brain Image Completion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
The infant brain undergoes rapid development in the first few years after birth. Compared to cross-sectional studies, longitudinal studies can depict the trajectories of infants’ brain development with higher accuracy, statistical power and flexibility. However, the collection of infant longitudinal magnetic resonance (MR) data suffers a notorious dropout problem, resulting in incomplete datasets with missing time points. This limitation significantly impedes subsequent neuroscience and clinical modeling. Yet, existing deep generative models are facing difficulties in missing brain image completion, due to sparse data and the nonlinear, dramatic contrast/geometric variations in the developing brain. We propose LoCI-DiffCom, a novel Longitudinal Consistency-Informed Diffusion model for infant brain image Completion, which integrates the images from preceding and subsequent time points to guide a diffusion model for generating high-fidelity missing data. Our designed LoCI module can work on highly sparse sequences, relying solely on data from two temporal points. Despite wide separation and diversity between age time points, our approach can extract individualized developmental features while ensuring context-aware consistency. Our experiments on a large infant brain MR dataset demonstrate its effectiveness with consistent performance on missing infant brain MR completion even in big gap scenarios, aiding in better delineation of early developmental trajectories.
LoCI-DiffCom: Longitudinal Consistency-Informed Diffusion Model for 3D Infant Brain Image Completion
[ "Zhu, Zihao", "Tao, Tianli", "Tao, Yitian", "Deng, Haowen", "Cai, Xinyi", "Wu, Gaofeng", "Wang, Kaidong", "Tang, Haifeng", "Zhu, Lixuan", "Gu, Zhuoyang", "Shen, Dinggang", "Zhang, Han" ]
Conference
2405.10691
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
339
null
https://papers.miccai.org/miccai-2024/paper/1060_paper.pdf
@InProceedings{ Kal_Unsupervised_MICCAI2024, author = { Kalkhof, John and Ranem, Amin and Mukhopadhyay, Anirban }, title = { { Unsupervised Training of Neural Cellular Automata on Edge Devices } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
The disparity in access to machine learning tools for medical imaging across different regions significantly limits the potential for universal healthcare innovation, particularly in remote areas. Our research addresses this issue by implementing Neural Cellular Automata (NCA) training directly on smartphones for accessible X-ray lung segmentation. We confirm the practicality and feasibility of deploying and training these advanced models on five Android devices, improving medical diagnostics accessibility and bridging the tech divide to extend machine learning benefits in medical imaging to low- and middle-income countries. We further enhance this approach with an unsupervised adaptation method using the novel Variance-Weighted Segmentation Loss (VWSL), which efficiently learns from unlabeled data by minimizing the variance from multiple NCA predictions. This strategy notably improves model adaptability and performance across diverse medical imaging contexts without the need for extensive computational resources or labeled datasets, effectively lowering the participation threshold. Our methodology, tested on three multisite X-ray datasets—Padchest, ChestX-ray8, and MIMIC-III—demonstrates improvements in segmentation Dice accuracy by 0.7 to 2.8%, compared to the classic Med-NCA. Additionally, in extreme cases where no digital copy is available and images must be captured by a phone from an X-ray lightbox or monitor, VWSL enhances Dice accuracy by 5-20%, demonstrating the method’s robustness even with suboptimal image sources.
Unsupervised Training of Neural Cellular Automata on Edge Devices
[ "Kalkhof, John", "Ranem, Amin", "Mukhopadhyay, Anirban" ]
Conference
2407.18114
[ "https://github.com/MECLabTUDA/M3D-NCA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
340
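The abstract above states that the Variance-Weighted Segmentation Loss learns from unlabeled images by minimizing the variance across multiple NCA predictions. The snippet below is a rough sketch of a plain variance-minimization term under that description, assuming the NCA forward pass is stochastic (e.g., random cell-update firing); the exact weighting used by VWSL is not reproduced.

```python
# Rough sketch: penalize per-pixel disagreement across stochastic NCA forward passes.
import torch

def prediction_variance_loss(nca_model, image, n_samples=4):
    preds = torch.stack([torch.sigmoid(nca_model(image)) for _ in range(n_samples)])
    return preds.var(dim=0, unbiased=False).mean()
```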
null
https://papers.miccai.org/miccai-2024/paper/0802_paper.pdf
@InProceedings{ Xie_SimTxtSeg_MICCAI2024, author = { Xie, Yuxin and Zhou, Tao and Zhou, Yi and Chen, Geng }, title = { { SimTxtSeg: Weakly-Supervised Medical Image Segmentation with Simple Text Cues } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Weakly-supervised medical image segmentation is a challenging task that aims to reduce the annotation cost while keeping the segmentation performance. In this paper, we present a novel framework, SimTxtSeg, that leverages simple text cues to generate high-quality pseudo-labels and, simultaneously, to study cross-modal fusion in training segmentation models. Our contribution consists of two key components: an effective Textual-to-Visual Cue Converter that produces visual prompts from text prompts on medical images, and a text-guided segmentation model with Text-Vision Hybrid Attention that fuses text and image features. We evaluate our framework on two medical image segmentation tasks: colonic polyp segmentation and MRI brain tumor segmentation, and achieve consistent state-of-the-art performance. Source code is available at: https://github.com/xyx1024/SimTxtSeg.
SimTxtSeg: Weakly-Supervised Medical Image Segmentation with Simple Text Cues
[ "Xie, Yuxin", "Zhou, Tao", "Zhou, Yi", "Chen, Geng" ]
Conference
2406.19364
[ "https://github.com/xyx1024/SimTxtSeg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
341
null
https://papers.miccai.org/miccai-2024/paper/2751_paper.pdf
@InProceedings{ Wan_Progressively_MICCAI2024, author = { Wang, Yaqi and Cao, Peng and Hou, Qingshan and Lan, Linqi and Yang, Jinzhu and Liu, Xiaoli and Zaiane, Osmar R. }, title = { { Progressively Correcting Soft Labels via Teacher Team for Knowledge Distillation in Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
State-of-the-art knowledge distillation (KD) methods aim to capture the underlying information within the teacher and explore effective strategies for knowledge transfer. However, due to challenges such as blurriness, noise, and low contrast inherent in medical images, the teacher’s predictions (soft labels) may also include false information, thus potentially misguiding the student’s learning process. Addressing this, we pioneer a novel correction-based KD approach (PLC-KD) and introduce two assistants for perceiving and correcting the false soft labels. More specifically, the false-pixel-aware assistant targets global error correction, while the boundary-aware assistant focuses on lesion boundary errors. Additionally, a similarity-based correction scheme is designed to forcefully rectify the remaining hard false pixels. Through this collaborative effort, teacher team (comprising a teacher and two assistants) progressively generates more accurate soft labels, ensuring the “all-correct” final soft labels for student guidance during KD. Extensive experimental results demonstrate that the proposed PLC-KD framework attains superior performance to state-of-the-art methods on three challenging medical segmentation tasks.
Progressively Correcting Soft Labels via Teacher Team for Knowledge Distillation in Medical Image Segmentation
[ "Wang, Yaqi", "Cao, Peng", "Hou, Qingshan", "Lan, Linqi", "Yang, Jinzhu", "Liu, Xiaoli", "Zaiane, Osmar R." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
342
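For context on the distillation step that the corrected soft labels in PLC-KD would supervise, the snippet below shows a standard pixel-wise KL distillation term; the teacher-team correction modules, which are the paper's actual contribution, are not part of this sketch.

```python
# Standard pixel-wise soft-label distillation term (illustrative, not the paper's pipeline).
import torch.nn.functional as F

def soft_label_kd_loss(student_logits, corrected_soft_labels):
    """student_logits, corrected_soft_labels: (B, C, H, W); labels sum to 1 over C."""
    log_p_student = F.log_softmax(student_logits, dim=1)
    return F.kl_div(log_p_student, corrected_soft_labels, reduction="batchmean")
```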
null
https://papers.miccai.org/miccai-2024/paper/2811_paper.pdf
@InProceedings{ Jeo_Uncertaintyaware_MICCAI2024, author = { Jeong, Minjae and Cho, Hyuna and Jung, Sungyoon and Kim, Won Hwa }, title = { { Uncertainty-aware Diffusion-based Adversarial Attack for Realistic Colonoscopy Image Synthesis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Automated semantic segmentation in colonoscopy is crucial for detecting colon polyps and preventing the development of colorectal cancer. However, the scarcity of annotated data presents a challenge to the segmentation task. Recent studies address this data scarcity issue with data augmentation techniques such as perturbing data with adversarial noise or using a generative model to sample unseen images from a learned data distribution. The perturbation approach controls the level of data ambiguity to expand discriminative regions, but the augmented noisy images exhibit a lack of diversity. On the other hand, generative models yield diverse realistic images but they cannot directly control the data ambiguity. Therefore, we propose Diffusion-based Adversarial attack for Semantic segmentation considering Pixel-level uncertainty (DASP), which incorporates both the controllability of ambiguity in adversarial attack and the data diversity of generative models. Using a hierarchical mask-to-image generation scheme, our method generates both expansive labels and their corresponding images that exhibit diversity and realism. Also, our method controls the magnitude of adversarial attack per pixel considering its uncertainty, such that a network prioritizes learning on challenging pixels. The effectiveness of our method is extensively validated on two public polyp segmentation benchmarks with four backbone networks, demonstrating its superiority over eleven baselines.
Uncertainty-aware Diffusion-based Adversarial Attack for Realistic Colonoscopy Image Synthesis
[ "Jeong, Minjae", "Cho, Hyuna", "Jung, Sungyoon", "Kim, Won Hwa" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
343
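The DASP abstract states that the per-pixel magnitude of the adversarial attack is controlled by uncertainty so that the network prioritizes challenging pixels. The sketch below illustrates that idea with an entropy-scaled FGSM-style perturbation, assuming `mask` holds integer class labels of shape (B, H, W); the diffusion-based mask-to-image generation used in the paper is not shown.

```python
# Illustrative uncertainty-scaled FGSM-style perturbation for a segmentation network.
import torch
import torch.nn.functional as F

def uncertainty_weighted_attack(model, image, mask, eps=0.03):
    """Pixels with higher predictive entropy receive a larger perturbation budget."""
    image = image.clone().requires_grad_(True)
    logits = model(image)                                    # (B, C, H, W)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp(min=1e-8).log()).sum(dim=1, keepdim=True)
    F.cross_entropy(logits, mask).backward()                 # gradient w.r.t. the input
    weight = entropy / entropy.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
    adv = image + eps * weight.detach() * image.grad.sign()
    return adv.detach().clamp(0, 1)
```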
null
https://papers.miccai.org/miccai-2024/paper/0236_paper.pdf
@InProceedings{ Sei_Spatial_MICCAI2024, author = { Seibold, Matthias and Bahari Malayeri, Ali and Fürnstahl, Philipp }, title = { { Spatial Context Awareness in Surgery through Sound Source Localization } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Context awareness and scene understanding is an integral component for the development of intelligent systems in computer-aided and robotic surgery. While most systems primarily utilize visual data for scene understanding, recent proof-of-concepts have showcased the potential of acoustic signals for the detection and analysis of surgical activity that is associated with typical noise emissions. However, acoustic approaches have not yet been effectively employed for localization tasks in surgery, which are crucial to obtain a comprehensive understanding of a scene. In this work, we introduce the novel concept of Sound Source Localization (SSL) for surgery which can reveal acoustic activity and its location in the surgical field, therefore providing insight into the interactions of surgical staff with the patient and medical equipment.
Spatial Context Awareness in Surgery through Sound Source Localization
[ "Seibold, Matthias", "Bahari Malayeri, Ali", "Fürnstahl, Philipp" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
344
null
https://papers.miccai.org/miccai-2024/paper/3008_paper.pdf
@InProceedings{ Hu_Perspective_MICCAI2024, author = { Hu, Jintong and Chen, Siyan and Pan, Zhiyi and Zeng, Sen and Yang, Wenming }, title = { { Perspective+ Unet: Enhancing Segmentation with Bi-Path Fusion and Efficient Non-Local Attention for Superior Receptive Fields } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Precise segmentation of medical images is fundamental for extracting critical clinical information, which plays a pivotal role in enhancing the accuracy of diagnoses, formulating effective treatment plans, and improving patient outcomes. Although Convolutional Neural Networks (CNNs) and non-local attention methods have achieved notable success in medical image segmentation, they either struggle to capture long-range spatial dependencies due to their reliance on local features, or face significant computational and feature integration challenges when attempting to address this issue with global attention mechanisms. To overcome existing limitations in medical image segmentation, we propose a novel architecture, Perspective+ Unet. This framework is characterized by three major innovations: (i) It introduces a dual-pathway strategy at the encoder stage that combines the outcomes of traditional and dilated convolutions. This not only maintains the local receptive field but also significantly expands it, enabling better comprehension of the global structure of images while retaining detail sensitivity. (ii) The framework incorporates an efficient non-local transformer block, named ENLTB, which utilizes kernel function approximation for effective long-range dependency capture with linear computational and spatial complexity. (iii) A Spatial Cross-Scale Integrator strategy is employed to merge global dependencies and local contextual cues across model stages, meticulously refining features from various levels to harmonize global and local information. Experimental results on the ACDC and Synapse datasets demonstrate the effectiveness of our proposed Perspective+ Unet. The code is available in the supplementary material.
Perspective+ Unet: Enhancing Segmentation with Bi-Path Fusion and Efficient Non-Local Attention for Superior Receptive Fields
[ "Hu, Jintong", "Chen, Siyan", "Pan, Zhiyi", "Zeng, Sen", "Yang, Wenming" ]
Conference
2406.14052
[ "https://github.com/tljxyys/Perspective-Unet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
345
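The abstract above describes an encoder pathway that combines standard and dilated convolutions to widen the receptive field while keeping local detail. The block below is a minimal sketch of that dual-pathway idea with illustrative layer sizes; it is not the paper's exact module.

```python
# Minimal dual-pathway block: parallel standard and dilated 3x3 convolutions, then fusion.
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=3):
        super().__init__()
        self.local = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.wide = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.fuse = nn.Sequential(nn.Conv2d(2 * out_ch, out_ch, 1),
                                  nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.fuse(torch.cat([self.local(x), self.wide(x)], dim=1))
```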
null
https://papers.miccai.org/miccai-2024/paper/2469_paper.pdf
@InProceedings{ Pod_HDilemma_MICCAI2024, author = { Podobnik, Gašper and Vrtovec, Tomaž }, title = { { HDilemma: Are Open-Source Hausdorff Distance Implementations Equivalent? } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Quantitative performance metrics play a pivotal role in medical imaging by offering critical insights into method performance and facilitating objective method comparison. Recently, platforms providing recommendations for metrics selection as well as resources for evaluating methods through computational challenges and online benchmarking have emerged, with an inherent assumption that metrics implementations are consistent across studies and equivalent throughout the community. In this study, we question this assumption by reviewing five different open-source implementations for computing the Hausdorff distance (HD), a boundary-based metric commonly used for assessing the performance of semantic segmentation. Despite sharing a single generally accepted mathematical definition, our experiments reveal notable systematic differences in the HD and its 95th percentile variant across implementations when applied to clinical segmentations with varying voxel sizes, which fundamentally impacts and constrains the ability to objectively compare results across different studies. Our findings should encourage the medical imaging community towards standardizing the implementation of the HD computation, so as to foster objective, reproducible and consistent comparisons when reporting performance results.
HDilemma: Are Open-Source Hausdorff Distance Implementations Equivalent?
[ "Podobnik, Gašper", "Vrtovec, Tomaž" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
346
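Since the paper's point is that open-source Hausdorff distance implementations disagree, a reference-style computation helps make the degrees of freedom explicit. The sketch below extracts surface voxels by binary erosion, scales them by the physical voxel spacing, and reports either the classic HD or a percentile variant (e.g., HD95 as the maximum of the two directed 95th percentiles); alternative surface extraction, spacing handling, and percentile conventions are exactly the kind of choices that produce the systematic differences reported.

```python
# One possible reference computation of HD / HD95 between two binary 3D masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask, spacing):
    """Boolean mask -> surface voxel coordinates in physical units (e.g. mm)."""
    border = mask & ~binary_erosion(mask)
    return np.argwhere(border) * np.asarray(spacing)

def hausdorff(mask_a, mask_b, spacing=(1.0, 1.0, 1.0), percentile=None):
    pts_a = surface_points(mask_a, spacing)
    pts_b = surface_points(mask_b, spacing)
    d_ab = cKDTree(pts_b).query(pts_a)[0]        # directed distances A -> B
    d_ba = cKDTree(pts_a).query(pts_b)[0]        # directed distances B -> A
    if percentile is None:                       # classic (maximum) HD
        return max(d_ab.max(), d_ba.max())
    # one common HD95 convention: max of the two directed 95th percentiles
    return max(np.percentile(d_ab, percentile), np.percentile(d_ba, percentile))
```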
null
https://papers.miccai.org/miccai-2024/paper/2447_paper.pdf
@InProceedings{ Bui_FALFormer_MICCAI2024, author = { Bui, Doanh C. and Vuong, Trinh Thi Le and Kwak, Jin Tae }, title = { { FALFormer: Feature-aware Landmarks self-attention for Whole-slide Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Slide-level classification for whole-slide images (WSIs) has been widely recognized as a crucial problem in digital and computational pathology. Current approaches commonly consider WSIs as a bag of cropped patches and process them via multiple instance learning due to the large number of patches, which cannot fully explore the relationship among patches; in other words, the global information cannot be fully incorporated into decision making. Herein, we propose an efficient and effective slide-level classification model, named as FALFormer, that can process a WSI as a whole so as to fully exploit the relationship among the entire patches and to improve the classification performance. FALFormer is built based upon Transformers and self-attention mechanism. To lessen the computational burden of the original self-attention mechanism and to process the entire patches together in a WSI, FALFormer employs Nyström self-attention which approximates the computation by using a smaller number of tokens or landmarks. For effective learning, FALFormer introduces feature-aware landmarks to enhance the representation power of the landmarks and the quality of the approximation. We systematically evaluate the performance of FALFormer using two public datasets, including CAMELYON16 and TCGA-BRCA. The experimental results demonstrate that FALFormer achieves superior performance on both datasets, outperforming the state-of-the-art methods for the slide-level classification. This suggests that FALFormer can facilitate an accurate and precise analysis of WSIs, potentially leading to improved diagnosis and prognosis on WSIs.
FALFormer: Feature-aware Landmarks self-attention for Whole-slide Image Classification
[ "Bui, Doanh C.", "Vuong, Trinh Thi Le", "Kwak, Jin Tae" ]
Conference
2407.07340
[ "https://github.com/quiil/falformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
347
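The FALFormer abstract builds on Nyström self-attention, which approximates the full N×N attention with a small set of landmark tokens. The snippet below is a simplified sketch of that approximation using plain segment-mean landmarks and an exact pseudo-inverse; FALFormer's feature-aware landmarks and the iterative pseudo-inverse typically used in practice are omitted.

```python
# Simplified Nystrom-style self-attention with segment-mean landmarks.
import torch
import torch.nn.functional as F

def nystrom_attention(q, k, v, num_landmarks=64):
    """q, k, v: (B, N, D) with N divisible by num_landmarks (illustrative constraint)."""
    b, n, d = q.shape
    q, k = q / d ** 0.25, k / d ** 0.25
    q_l = q.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)  # landmark queries
    k_l = k.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)  # landmark keys
    kernel1 = F.softmax(q @ k_l.transpose(-1, -2), dim=-1)                # (B, N, m)
    kernel2 = F.softmax(q_l @ k_l.transpose(-1, -2), dim=-1)              # (B, m, m)
    kernel3 = F.softmax(q_l @ k.transpose(-1, -2), dim=-1)                # (B, m, N)
    return kernel1 @ torch.linalg.pinv(kernel2) @ (kernel3 @ v)           # (B, N, D)
```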
null
https://papers.miccai.org/miccai-2024/paper/1336_paper.pdf
@InProceedings{ Lin_Beyond_MICCAI2024, author = { Lin, Xian and Xiang, Yangyang and Yu, Li and Yan, Zengqiang }, title = { { Beyond Adapting SAM: Towards End-to-End Ultrasound Image Segmentation via Auto Prompting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
End-to-end medical image segmentation is of great value for computer-aided diagnosis dominated by task-specific models, usually suffering from poor generalization. With recent breakthroughs brought by the segment anything model (SAM) for universal image segmentation, extensive efforts have been made to adapt SAM for medical imaging but still encounter two major issues: 1) severe performance degradation and limited generalization without proper adaptation, and 2) semi-automatic segmentation relying on accurate manual prompts for interaction. In this work, we propose SAMUS as a universal model tailored for ultrasound image segmentation and further enable it to work in an end-to-end manner denoted as AutoSAMUS. Specifically, in SAMUS, a parallel CNN branch is introduced to supplement local information through cross-branch attention, and a feature adapter and a position adapter are jointly used to adapt SAM from natural to ultrasound domains while reducing training complexity. AutoSAMUS is realized by introducing an auto prompt generator (APG) to replace the manual prompt encoder of SAMUS to automatically generate prompt embeddings. A comprehensive ultrasound dataset, comprising about 30k images and 69k masks and covering six object categories, is collected for verification. Extensive comparison experiments demonstrate the superiority of SAMUS and AutoSAMUS against the state-of-the-art task-specific and SAM-based foundation models. We believe the auto-prompted SAM-based model has the potential to become a new paradigm for end-to-end medical image segmentation and deserves more exploration. Code and data are available at https://github.com/xianlin7/SAMUS.
Beyond Adapting SAM: Towards End-to-End Ultrasound Image Segmentation via Auto Prompting
[ "Lin, Xian", "Xiang, Yangyang", "Yu, Li", "Yan, Zengqiang" ]
Conference
2309.06824
[ "https://github.com/xianlin7/SAMUS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
348
null
https://papers.miccai.org/miccai-2024/paper/1807_paper.pdf
@InProceedings{ Na_RadiomicsFillMammo_MICCAI2024, author = { Na, Inye and Kim, Jonghun and Ko, Eun Sook and Park, Hyunjin }, title = { { RadiomicsFill-Mammo: Synthetic Mammogram Mass Manipulation with Radiomics Features } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Motivated by the question, “Can we generate tumors with desired attributes?” this study leverages radiomics features to explore the feasibility of generating synthetic tumor images. Characterized by its low-dimensional yet biologically meaningful markers, radiomics bridges the gap between complex medical imaging data and actionable clinical insights. We present RadiomicsFill-Mammo, the first of the RadiomicsFill series, an innovative technique that generates realistic mammogram mass images mirroring specific radiomics attributes using masked images and opposite breast images, leveraging a recent stable diffusion model. This approach also allows for the incorporation of essential clinical variables, such as BI-RADS and breast density, alongside radiomics features as conditions for mass generation. Results indicate that RadiomicsFill-Mammo effectively generates diverse and realistic tumor images based on various radiomics conditions. Results also demonstrate a significant improvement in mass detection capabilities, leveraging RadiomicsFill-Mammo as a strategy to generate simulated samples. Furthermore, RadiomicsFill-Mammo not only advances medical imaging research but also opens new avenues for enhancing treatment planning and tumor simulation. Our code is available at https://github.com/nainye/RadiomicsFill.
RadiomicsFill-Mammo: Synthetic Mammogram Mass Manipulation with Radiomics Features
[ "Na, Inye", "Kim, Jonghun", "Ko, Eun Sook", "Park, Hyunjin" ]
Conference
2407.05683
[ "https://github.com/nainye/RadiomicsFill" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
349
null
https://papers.miccai.org/miccai-2024/paper/2872_paper.pdf
@InProceedings{ Kan_MedSynth_MICCAI2024, author = { Kanagavelu, Renuga and Walia, Madhav and Wang, Yuan and Fu, Huazhu and Wei, Qingsong and Liu, Yong and Goh, Rick Siow Mong }, title = { { MedSynth: Leveraging Generative Model for Healthcare Data Sharing } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Sharing medical datasets among healthcare organizations is essential for advancing AI-assisted disease diagnostics and enhancing patient care. Employing techniques like data de-identification and data synthesis in medical data sharing, however, comes with inherent drawbacks that may lead to privacy leakage. Therefore, there is a pressing need for mechanisms that can effectively conceal sensitive information, ensuring a secure environment for data sharing. Dataset Condensation (DC) emerges as a solution, creating a reduced-scale synthetic dataset from a larger original dataset while maintaining comparable training outcomes. This approach offers advantages in terms of privacy and communication efficiency in the context of medical data sharing. Despite these benefits, traditional condensation methods encounter challenges, particularly with high-resolution medical datasets. To address these challenges, we present MedSynth, a novel dataset condensation scheme designed to efficiently condense the knowledge within extensive medical datasets into a generative model. This facilitates the sharing of the generative model across hospitals without the need to disclose raw data. By combining an attention-based generator with a vision transformer (ViT), MedSynth creates a generative model capable of producing a concise set of representative synthetic medical images, encapsulating the features of the original dataset. This generative model can then be shared with hospitals to optimize various downstream model training tasks. Extensive experimental results across medical datasets demonstrate that MedSynth outperforms state-of-the-art methods. Moreover, MedSynth successfully defends against state-of-the-art Membership Inference Attacks (MIA), highlighting its significant potential in preserving the privacy of medical data.
MedSynth: Leveraging Generative Model for Healthcare Data Sharing
[ "Kanagavelu, Renuga", "Walia, Madhav", "Wang, Yuan", "Fu, Huazhu", "Wei, Qingsong", "Liu, Yong", "Goh, Rick Siow Mong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
350
null
https://papers.miccai.org/miccai-2024/paper/1207_paper.pdf
@InProceedings{ Din_HiA_MICCAI2024, author = { Ding, Xinpeng and Chu, Yongqiang and Pi, Renjie and Wang, Hualiang and Li, Xiaomeng }, title = { { HiA: Towards Chinese Multimodal LLMs for Comparative High-Resolution Joint Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Multimodal large language models (MLLMs) have been explored in the Chinese medical domain for comprehending complex healthcare. However, due to the flaws in training data and architecture design, current Chinese medical MLLMs suffer from several limitations: cultural biases from English machine translations, limited comparative ability from single-image input, and difficulty in identifying small lesions with low-resolution images. To address these problems, we first introduce a new instruction-following dataset, Chili-Joint (Chinese Interleaved Image-Text Dataset for Joint Diagnosis), collected from the hospital in mainland China, avoiding cultural biases and errors caused by machine translation. Besides one single image input, Chili-Joint also has multiple images obtained at various intervals during a patient’s treatment, thus facilitating an evaluation of the treatment’s outcomes. We further propose a novel HiA (High-resolution instruction-aware Adapter) to incorporate high-resolution instruction-aware visual features into LLMs, helping current MLLMs observe small lesions and perform comparative analysis. Extensive experiments on Chili-Joint demonstrate that our HiA can be a plug-and-play method to improve the performance of current MLLMs for medical analysis. The code is available at https://github.com/xmed-lab/HiA.
HiA: Towards Chinese Multimodal LLMs for Comparative High-Resolution Joint Diagnosis
[ "Ding, Xinpeng", "Chu, Yongqiang", "Pi, Renjie", "Wang, Hualiang", "Li, Xiaomeng" ]
Conference
[ "https://github.com/xmed-lab/HiA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
351
null
https://papers.miccai.org/miccai-2024/paper/1494_paper.pdf
@InProceedings{ Wit_SimulationBased_MICCAI2024, author = { Wittmann, Bastian and Glandorf, Lukas and Paetzold, Johannes C. and Amiranashvili, Tamaz and Wälchli, Thomas and Razansky, Daniel and Menze, Bjoern }, title = { { Simulation-Based Segmentation of Blood Vessels in Cerebral 3D OCTA Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Segmentation of blood vessels in murine cerebral 3D OCTA images is foundational for in vivo quantitative analysis of the effects of neurovascular disorders, such as stroke or Alzheimer’s, on the vascular network. However, to accurately segment blood vessels with state-of-the-art deep learning methods, a vast amount of voxel-level annotations is required. Since cerebral 3D OCTA images are typically plagued by artifacts and generally have a low signal-to-noise ratio, acquiring manual annotations poses an especially cumbersome and time-consuming task. To alleviate the need for manual annotations, we propose utilizing synthetic data to supervise segmentation algorithms. To this end, we extract patches from vessel graphs and transform them into synthetic cerebral 3D OCTA images paired with their matching ground truth labels by simulating the most dominant 3D OCTA artifacts. In extensive experiments, we demonstrate that our approach achieves competitive results, enabling annotation-free blood vessel segmentation in cerebral 3D OCTA images.
Simulation-Based Segmentation of Blood Vessels in Cerebral 3D OCTA Images
[ "Wittmann, Bastian", "Glandorf, Lukas", "Paetzold, Johannes C.", "Amiranashvili, Tamaz", "Wälchli, Thomas", "Razansky, Daniel", "Menze, Bjoern" ]
Conference
2403.07116
[ "https://github.com/bwittmann/syn-cerebral-octa-seg" ]
https://huggingface.co/papers/2403.07116
0
0
0
7
[]
[ "bwittmann/syn-cerebral-octa-seg" ]
[]
[]
[ "bwittmann/syn-cerebral-octa-seg" ]
[]
1
Poster
352
null
https://papers.miccai.org/miccai-2024/paper/3485_paper.pdf
@InProceedings{ Das_Confidenceguided_MICCAI2024, author = { Das, Abhijit and Gorade, Vandan and Kumar, Komal and Chakraborty, Snehashis and Mahapatra, Dwarikanath and Roy, Sudipta }, title = { { Confidence-guided Semi-supervised Learning for Generalized Lesion Localization in X-ray Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
In recent years, pseudo-label (PL) based semi-supervised (SS) methods have been proposed for disease localization in medical images for tasks with limited labeled data. However, these models are not curated for chest X-rays containing anomalies of different shapes and sizes. As a result, existing methods suffer from biased attentiveness towards the minority class and PL inconsistency. Soft-labeling-based methods filter out PLs with higher uncertainty but lead to a loss of fine-grained features of minor articulates, resulting in sparse predictions. To address these challenges, we propose AnoMed, an uncertainty-aware SS framework with a novel scale-invariant bottleneck (SIB) and confidence-guided pseudo-label optimizer (PLO). SIB leverages the base feature (Fb) obtained from any encoder to capture multi-granular anatomical structures and underlying representations. On top of that, PLO refines hesitant PLs and guides them separately for the unsupervised loss, reducing inconsistency. Our extensive experiments on cardiac datasets and out-of-distribution (OOD) fine-tuning demonstrate that AnoMed outperforms other state-of-the-art (SOTA) methods, such as Efficient Teacher and Mean Teacher, with improvements of 4.9 and 5.9 in AP50:95 on VinDr-CXR data. Code for our architecture is available at https://github.com/aj-das-research/AnoMed.
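A minimal sketch of confidence-guided pseudo-label handling in the spirit of the PLO described above, assuming a simple two-threshold split into confident and hesitant pseudo-boxes; the thresholds, weighting, and detection-style tensors are illustrative assumptions, not AnoMed's actual module.

```python
# Hedged sketch of confidence-guided pseudo-label filtering for detection-style
# outputs. Thresholds and the soft weighting are illustrative only.
import torch

def split_pseudo_labels(scores, boxes, hi=0.7, lo=0.3):
    """Return (confident, hesitant) pseudo-boxes based on predicted scores."""
    confident = boxes[scores >= hi]
    hesitant = boxes[(scores >= lo) & (scores < hi)]
    return confident, hesitant

scores = torch.tensor([0.9, 0.55, 0.2, 0.8])
boxes = torch.tensor([[10, 10, 50, 50], [5, 5, 20, 20],
                      [0, 0, 4, 4], [30, 30, 90, 90]], dtype=torch.float)
conf, hes = split_pseudo_labels(scores, boxes)
# A typical unsupervised loss would weight the two groups differently, e.g.
# L_unsup = L(confident preds) + 0.5 * L(hesitant preds), so hesitant labels
# still contribute fine-grained signal without dominating training.
print(conf.shape, hes.shape)
```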
Confidence-guided Semi-supervised Learning for Generalized Lesion Localization in X-ray Images
[ "Das, Abhijit", "Gorade, Vandan", "Kumar, Komal", "Chakraborty, Snehashis", "Mahapatra, Dwarikanath", "Roy, Sudipta" ]
Conference
[ "https://github.com/aj-das-research/AnoMed" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
353
null
https://papers.miccai.org/miccai-2024/paper/1015_paper.pdf
@InProceedings{ Fu_3DGRCAR_MICCAI2024, author = { Fu, Xueming and Li, Yingtai and Tang, Fenghe and Li, Jun and Zhao, Mingyue and Teng, Gao-Jun and Zhou, S. Kevin }, title = { { 3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray views with a 3D Gaussians representation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Reconstructing 3D coronary arteries is important for coronary artery disease diagnosis, treatment planning and operation navigation. Traditional techniques often require many projections, while reconstruction from sparse-view X-ray projections is a potential way of reducing radiation dose. However, the extreme sparsity of coronary artery volume and the ultra-limited number of projections pose significant challenges for efficient and accurate 3D reconstruction. We propose 3DGR-CAR, a 3D Gaussian Representation for Coronary Artery Reconstruction from ultra-sparse X-ray projections. We leverage the 3D Gaussian representation to avoid the inefficiency caused by the extreme sparsity of coronary artery data, and propose a Gaussian center predictor to overcome the noisy Gaussian initialization from ultra-sparse view projections. The proposed scheme enables fast and accurate 3D coronary artery reconstruction with only 2 views. Experimental results on two datasets indicate that the proposed approach significantly outperforms other methods in terms of voxel accuracy and the visual quality of the coronary arteries.
3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray views with a 3D Gaussians representation
[ "Fu, Xueming", "Li, Yingtai", "Tang, Fenghe", "Li, Jun", "Zhao, Mingyue", "Teng, Gao-Jun", "Zhou, S. Kevin" ]
Conference
2410.00404
[ "https://github.com/windrise/3DGR-CAR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
354
null
https://papers.miccai.org/miccai-2024/paper/3387_paper.pdf
@InProceedings{ Hol_Glioblastoma_MICCAI2024, author = { Holden Helland, Ragnhild and Bouget, David and Eijgelaar, Roelant S. and De Witt Hamer, Philip C. and Barkhof, Frederik and Solheim, Ole and Reinertsen, Ingerid }, title = { { Glioblastoma segmentation from early post-operative MRI: challenges and clinical impact } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Post-surgical evaluation and quantification of residual tumor tissue from magnetic resonance images (MRI) is a crucial step for treatment planning and follow-up in glioblastoma care. Segmentation of enhancing residual tumor tissue from early post-operative MRI is particularly challenging due to small and fragmented lesions, post-operative bleeding, and noise in the resection cavity. Although a lot of progress has been made on the adjacent task of pre-operative glioblastoma segmentation, more targeted methods are needed for addressing the specific challenges and detecting small lesions. In this study, a state-of-the-art architecture for pre-operative segmentation was used, trained on a large in-house multi-center dataset for early post-operative segmentation. Various pre-processing, data sampling techniques, and architecture variants were explored for improving the detection of small lesions. The models were evaluated on a dataset annotated by 8 novice and expert human raters, and their performance was compared against the human inter-rater variability. The trained models' performance was shown to be on par with that of human expert raters. As such, automatic segmentation models have the potential to be a valuable tool in a clinical setting, serving as an accurate and time-saving alternative to the current standard manual method for residual tumor measurement after surgery.
Glioblastoma segmentation from early post-operative MRI: challenges and clinical impact
[ "Holden Helland, Ragnhild", "Bouget, David", "Eijgelaar, Roelant S.", "De Witt Hamer, Philip C.", "Barkhof, Frederik", "Solheim, Ole", "Reinertsen, Ingerid" ]
Conference
[ "https://github.com/dbouget/validation_metrics_computation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
355
null
https://papers.miccai.org/miccai-2024/paper/0158_paper.pdf
@InProceedings{ Rey_EchoNetSynthetic_MICCAI2024, author = { Reynaud, Hadrien and Meng, Qingjie and Dombrowski, Mischa and Ghosh, Arijit and Day, Thomas and Gomez, Alberto and Leeson, Paul and Kainz, Bernhard }, title = { { EchoNet-Synthetic: Privacy-preserving Video Generation for Safe Medical Data Sharing } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
To make medical datasets accessible without sharing sensitive patient information, we introduce a novel end-to-end approach for generative de-identification of dynamic medical imaging data. Until now, generative methods have faced constraints in terms of fidelity, spatio-temporal coherence, and the length of generation, failing to capture the complete details of dataset distributions. We present a model designed to produce high-fidelity, long and complete data samples with near-real-time efficiency and explore our approach on a challenging task: generating echocardiogram videos. We develop our generation method based on diffusion models and introduce a protocol for medical video dataset anonymization. As an exemplar, we present EchoNet-Synthetic, a fully synthetic, privacy-compliant echocardiogram dataset with paired ejection fraction labels. As part of our de-identification protocol, we evaluate the quality of the generated dataset and propose to use clinical downstream tasks as a measurement on top of widely used but potentially biased image quality metrics. Experimental outcomes demonstrate that EchoNet-Synthetic achieves comparable dataset fidelity to the actual dataset, effectively supporting the ejection fraction regression task. Code, weights and dataset are available at https://github.com/HReynaud/EchoNet-Synthetic.
EchoNet-Synthetic: Privacy-preserving Video Generation for Safe Medical Data Sharing
[ "Reynaud, Hadrien", "Meng, Qingjie", "Dombrowski, Mischa", "Ghosh, Arijit", "Day, Thomas", "Gomez, Alberto", "Leeson, Paul", "Kainz, Bernhard" ]
Conference
2406.00808
[ "https://github.com/HReynaud/EchoNet-Synthetic" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
356
null
https://papers.miccai.org/miccai-2024/paper/3653_paper.pdf
@InProceedings{ Hu_ConsecutiveContrastive_MICCAI2024, author = { Hu, Dan and Han, Kangfu and Cheng, Jiale and Li, Gang }, title = { { Consecutive-Contrastive Spherical U-net: Enhancing Reliability of Individualized Functional Brain Parcellation for Short-duration fMRI Scans } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Individualized brain parcellations derived from functional MRI (fMRI) are essential for discerning unique functional patterns of individuals, facilitating personalized diagnoses and treatments. Unfortunately, as fMRI signals are inherently noisy, establishing reliable individualized parcellations typically necessitates a long-duration fMRI scan (> 25 min), posing a major challenge and resulting in the exclusion of numerous short-duration fMRI scans from individualized studies. To address this issue, we develop a novel Consecutive-Contrastive Spherical U-net (CC-SUnet) to enable the prediction of reliable individualized brain parcellation using short-duration fMRI data, greatly expanding its practical applicability. Specifically, 1) the widely used functional diffusion map (DM), obtained from functional connectivity, is carefully selected as the predictive feature, for its advantage in tracing the transitions between regions while reducing noise. To ensure a robust depiction of the brain network, we propose a dual-task model to predict DM and cortical parcellation simultaneously, fully utilizing their reciprocal relationship. 2) By constructing a stepwise dataset to capture the gradual changes of DM over increasing scan durations, a consecutive prediction framework is designed to realize the prediction from short-to-long gradually. 3) A stepwise-denoising-prediction module is further proposed. The noise representations are separated and replaced by the latent representations of a group-level diffusion map, realizing informative guidance and denoising concurrently. 4) Additionally, an N-pair contrastive loss is introduced to strengthen the discriminability of the individualized parcellations. Extensive experimental results demonstrated the superiority of our proposed CC-SUnet in enhancing the reliability of the individualized parcellation with short-duration fMRI data, thereby significantly boosting their utility in individualized studies.
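A minimal sketch of an N-pair-style contrastive loss of the kind mentioned above, assuming normalized embeddings where row i of one set is the positive of row i in the other and all remaining rows act as negatives; the temperature and shapes are assumptions, and the exact formulation in CC-SUnet may differ.

```python
# Hedged sketch of an N-pair / InfoNCE-style contrastive loss between two sets
# of embeddings (e.g. short-scan and long-scan parcellation features).
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives, temperature=0.1):
    a = F.normalize(anchors, dim=1)      # (N, D)
    p = F.normalize(positives, dim=1)    # (N, D); row i is the positive of a[i]
    logits = a @ p.t() / temperature     # off-diagonal entries act as negatives
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

loss = n_pair_loss(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```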
Consecutive-Contrastive Spherical U-net: Enhancing Reliability of Individualized Functional Brain Parcellation for Short-duration fMRI Scans
[ "Hu, Dan", "Han, Kangfu", "Cheng, Jiale", "Li, Gang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
357
null
https://papers.miccai.org/miccai-2024/paper/1619_paper.pdf
@InProceedings{ Xu_Multiscale_MICCAI2024, author = { Xu, Yanyu and Xia, Yingzhi and Fu, Huazhu and Goh, Rick Siow Mong and Liu, Yong and Xu, Xinxing }, title = { { Multi-scale Region-aware Implicit Neural Network for Medical Images Matting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Medical image segmentation is a critical task in computer-assisted diagnosis and disease monitoring, where labeling complex and ambiguous targets poses a significant challenge. Recently, the alpha matte has been investigated as a soft mask in medical scenes, using continuous values to quantify and distinguish uncertain lesions with high diagnostic value. In this work, we propose a multi-scale region-aware implicit function network for the medical matting problem. Firstly, we design a region-aware implicit neural function to interpolate over larger and more flexible regions, preserving important input details. Further, the method employs multi-scale feature fusion to efficiently and precisely aggregate features from different levels. Experimental results on public medical matting datasets demonstrate the effectiveness of our proposed approach, and we release the code and models at https://github.com/xuyanyu-shh/MedicalMattingMLP.
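A minimal sketch of a region-aware implicit-function head for matting as described above, assuming multi-scale features are bilinearly sampled at query coordinates and concatenated with the coordinates before an MLP predicts the alpha value; the dimensions and sampling scheme are simplified assumptions, not the paper's exact design.

```python
# Hedged sketch of an implicit-function matting head: per-coordinate feature
# sampling from multi-scale maps, followed by an MLP that outputs alpha.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitMattingHead(nn.Module):
    def __init__(self, feat_dims=(32, 64), hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(sum(feat_dims) + 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, feats, coords):
        # feats: list of (B, C_i, H_i, W_i); coords: (B, N, 2) in [-1, 1]
        grid = coords.unsqueeze(2)                       # (B, N, 1, 2)
        sampled = [F.grid_sample(f, grid, align_corners=False)
                   .squeeze(-1).transpose(1, 2) for f in feats]  # each (B, N, C_i)
        x = torch.cat(sampled + [coords], dim=-1)
        return self.mlp(x).squeeze(-1)                   # (B, N) alpha values

head = ImplicitMattingHead()
feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32)]
alpha = head(feats, torch.rand(1, 100, 2) * 2 - 1)
print(alpha.shape)
```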
Multi-scale Region-aware Implicit Neural Network for Medical Images Matting
[ "Xu, Yanyu", "Xia, Yingzhi", "Fu, Huazhu", "Goh, Rick Siow Mong", "Liu, Yong", "Xu, Xinxing" ]
Conference
[ "https://github.com/xuyanyu-shh/MedicalMattingMLP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
358
null
https://papers.miccai.org/miccai-2024/paper/1328_paper.pdf
@InProceedings{ Lai_EchoMEN_MICCAI2024, author = { Lai, Song and Zhao, Mingyang and Zhao, Zhe and Chang, Shi and Yuan, Xiaohua and Liu, Hongbin and Zhang, Qingfu and Meng, Gaofeng }, title = { { EchoMEN: Combating Data Imbalance in Ejection Fraction Regression via Multi-Expert Network } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Ejection Fraction (EF) regression faces a critical challenge due to severe data imbalance since samples in the normal EF range significantly outnumber those in the abnormal range. This imbalance results in a bias in existing EF regression methods towards the normal population, undermining health equity. Furthermore, current imbalanced regression methods struggle with the head-tail performance trade-off, leading to increased prediction errors for the normal population. In this paper, we introduce EchoMEN, a multi-expert model designed to improve EF regression with balanced performance. EchoMEN adopts a two-stage decoupled training strategy. The first stage proposes a Label-Distance Weighted Supervised Contrastive Loss to enhance representation learning. This loss considers the label relationship among negative sample pairs, which encourages samples further apart in label space to be further apart in feature space. The second stage trains multiple regression experts independently with variably re-weighted settings, focusing on different parts of the target region. Their predictions are then combined using a weighted method to learn an unbiased ensemble regressor. Extensive experiments on the EchoNet-Dynamic dataset demonstrate that EchoMEN outperforms state-of-the-art algorithms and achieves well-balanced performance throughout all heart failure categories. Code: https://github.com/laisong-22004009/EchoMEN.
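A minimal sketch of the second-stage idea above: several regression experts, each intended to be trained with a different sample re-weighting, combined by a weighted sum into a single EF prediction. The expert architecture and combination weights are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a multi-expert regression ensemble over shared features.
import torch
import torch.nn as nn

class ExpertHead(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, feats):
        return self.fc(feats).squeeze(-1)

experts = nn.ModuleList([ExpertHead() for _ in range(3)])   # head/mid/tail focus
ensemble_w = torch.softmax(torch.tensor([0.2, 0.3, 0.5]), dim=0)  # assumed weights

feats = torch.randn(4, 256)                                 # shared video features (toy)
preds = torch.stack([e(feats) for e in experts], dim=-1)    # (B, 3)
ef_pred = (preds * ensemble_w).sum(-1)                      # weighted ensemble EF prediction
print(ef_pred.shape)
```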
EchoMEN: Combating Data Imbalance in Ejection Fraction Regression via Multi-Expert Network
[ "Lai, Song", "Zhao, Mingyang", "Zhao, Zhe", "Chang, Shi", "Yuan, Xiaohua", "Liu, Hongbin", "Zhang, Qingfu", "Meng, Gaofeng" ]
Conference
[ "https://github.com/laisong-22004009/EchoMEN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
359
null
https://papers.miccai.org/miccai-2024/paper/2991_paper.pdf
@InProceedings{ Yan_SCMIL_MICCAI2024, author = { Yang, Zekang and Liu, Hong and Wang, Xiangdong }, title = { { SCMIL: Sparse Context-aware Multiple Instance Learning for Predicting Cancer Survival Probability Distribution in Whole Slide Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Cancer survival prediction is a challenging task that involves analyzing the tumor microenvironment within Whole Slide Images (WSIs). Previous methods cannot effectively capture the intricate interaction features among instances within local areas of a WSI. Moreover, existing methods for cancer survival prediction based on WSIs often fail to provide clinically meaningful predictions. To overcome these challenges, we propose a Sparse Context-aware Multiple Instance Learning (SCMIL) framework for predicting cancer survival probability distributions. SCMIL innovatively segments patches into various clusters based on their morphological features and spatial location information, subsequently leveraging sparse self-attention to discern the relationships between these patches from a context-aware perspective. Considering that many patches are irrelevant to the task, we introduce a learnable patch filtering module called SoftFilter, which ensures that only interactions between task-relevant patches are considered. To enhance the clinical relevance of our predictions, we propose a register-based mixture density network to forecast the survival probability distribution for individual patients. We evaluate SCMIL on two public WSI datasets from The Cancer Genome Atlas (TCGA), specifically focusing on lung adenocarcinoma (LUAD) and kidney renal clear cell carcinoma (KIRC). Our experimental results indicate that SCMIL outperforms current state-of-the-art methods for survival prediction, offering more clinically meaningful and interpretable outcomes. Our code is accessible at https://github.com/yang-ze-kang/SCMIL.
SCMIL: Sparse Context-aware Multiple Instance Learning for Predicting Cancer Survival Probability Distribution in Whole Slide Images
[ "Yang, Zekang", "Liu, Hong", "Wang, Xiangdong" ]
Conference
2407.00664
[ "https://github.com/yang-ze-kang/SCMIL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
360
null
https://papers.miccai.org/miccai-2024/paper/0670_paper.pdf
@InProceedings{ Yan_CoarseGrained_MICCAI2024, author = { Yan, Yige and Cheng, Jun and Yang, Xulei and Gu, Zaiwang and Leng, Shuang and Tan, Ru San and Zhong, Liang and Rajapakse, Jagath C. }, title = { { Coarse-Grained Mask Regularization for Microvascular Obstruction Identification from non-contrast Cardiac Magnetic Resonance } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Identification of microvascular obstruction (MVO) in acute myocardial infarction patients is critical for prognosis and has a direct link to mortality risk. Current approaches using late gadolinium enhancement (LGE) for contrast-enhanced cardiac magnetic resonance (CMR) pose risks to the kidney and may not be applicable to many patients. This highlights the need to explore alternative non-contrast imaging methods, such as cine CMR, for MVO identification. However, the scarcity of datasets and the challenges in annotation make MVO identification in cine CMR difficult, and it remains largely under-explored. For this purpose, we propose a non-contrast MVO identification framework for cine CMR with a novel coarse-grained mask regularization strategy to better utilize information from LGE annotations during training. We train and test our model on a dataset comprising 680 cases. Our model demonstrates superior performance over competing methods in cine CMR-based MVO identification, proving its feasibility and presenting a novel and patient-friendly approach to the field. The code is available at https://github.com/code-koukai/MVO-identification.
Coarse-Grained Mask Regularization for Microvascular Obstruction Identification from non-contrast Cardiac Magnetic Resonance
[ "Yan, Yige", "Cheng, Jun", "Yang, Xulei", "Gu, Zaiwang", "Leng, Shuang", "Tan, Ru San", "Zhong, Liang", "Rajapakse, Jagath C." ]
Conference
[ "https://github.com/code-koukai/MVO-identification" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
361
null
https://papers.miccai.org/miccai-2024/paper/1538_paper.pdf
@InProceedings{ Gao_MBANet_MICCAI2024, author = { Gao, Yifan and Xia, Wei and Wang, Wenkui and Gao, Xin }, title = { { MBA-Net: SAM-driven Bidirectional Aggregation Network for Ovarian Tumor Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Accurate segmentation of ovarian tumors from medical images is crucial for early diagnosis, treatment planning, and patient management. However, the diverse morphological characteristics and heterogeneous appearances of ovarian tumors pose significant challenges to automated segmentation methods. In this paper, we propose MBA-Net, a novel architecture that integrates the powerful segmentation capabilities of the Segment Anything Model (SAM) with domain-specific knowledge for accurate and robust ovarian tumor segmentation. MBA-Net employs a hybrid encoder architecture, where the encoder consists of a prior branch, which inherits the SAM encoder to capture robust segmentation priors, and a domain branch, specifically designed to extract domain-specific features. The bidirectional flow of information between the two branches is facilitated by the robust feature injection network (RFIN) and the domain knowledge integration network (DKIN), enabling MBA-Net to leverage the complementary strengths of both branches. We extensively evaluate MBA-Net on the public multi-modality ovarian tumor ultrasound dataset and the in-house multi-site ovarian tumor MRI dataset. Our proposed method consistently outperforms state-of-the-art segmentation approaches. Moreover, MBA-Net demonstrates superior generalization capability across different imaging modalities and clinical sites.
MBA-Net: SAM-driven Bidirectional Aggregation Network for Ovarian Tumor Segmentation
[ "Gao, Yifan", "Xia, Wei", "Wang, Wenkui", "Gao, Xin" ]
Conference
2407.05984
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
362
null
https://papers.miccai.org/miccai-2024/paper/1354_paper.pdf
@InProceedings{ Fan_PathMamba_MICCAI2024, author = { Fan, Jiansong and Lv, Tianxu and Di, Yicheng and Li, Lihua and Pan, Xiang }, title = { { PathMamba: Weakly Supervised State Space Model for Multi-class Segmentation of Pathology Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Accurate segmentation of pathology images plays a crucial role in digital pathology workflow. Fully supervised models have achieved excellent performance through dense pixel-level annotation. However, annotation on gigapixel pathology images is extremely expensive and time-consuming. Recently, the state space model with efficient hardware-aware design, known as Mamba, has achieved impressive results. In this paper, we propose a weakly supervised state space model (PathMamba) for multi-class segmentation of pathology images using only image-level labels. Our method integrates the standard features of both pixel-level and patch-level pathology images and can generate more regionally consistent segmentation results. Specifically, we first extract pixel-level feature maps based on Multi-Instance Multi-Label Learning by treating pixels as instances, which are subsequently injected into our designed Contrastive Mamba Block. The Contrastive Mamba Block adopts a state space model and integrates the concept of contrastive learning to extract non-causal dual-granularity features in pathological images. In addition, we suggest a Deep Contrast Supervised Loss to fully utilize the limited annotated information in weakly supervised methods. Our approach facilitates a comprehensive feature learning process and captures complex details and broader global contextual semantics in pathology images. Experiments on two public pathology image datasets show that the proposed method performs better than state-of-the-art weakly supervised methods. The code is available at https://github.com/hemo0826/PathMamba.
PathMamba: Weakly Supervised State Space Model for Multi-class Segmentation of Pathology Images
[ "Fan, Jiansong", "Lv, Tianxu", "Di, Yicheng", "Li, Lihua", "Pan, Xiang" ]
Conference
[ "https://github.com/hemo0826/PathMamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
363
null
https://papers.miccai.org/miccai-2024/paper/3214_paper.pdf
@InProceedings{ Wan_Doubletier_MICCAI2024, author = { Wang, Mingkang and Wang, Tong and Cong, Fengyu and Lu, Cheng and Xu, Hongming }, title = { { Double-tier Attention based Multi-label Learning Network for Predicting Biomarkers from Whole Slide Images of Breast Cancer } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Hematoxylin and eosin (H&E) staining offers the advantages of low cost and high stability, effectively revealing the morphological structure of the nucleus and tissue. Predicting the expression levels of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) from H&E stained slides is crucial for reducing the detection cost of the immunohistochemistry (IHC) method and tailoring the treatment of breast cancer patients. However, this task faces significant challenges due to the scarcity of large-scale and well-annotated datasets. In this paper, we propose a double-tier attention based multi-label learning network, termed DAMLN, for simultaneous prediction of ER, PR, and HER2 from H&E stained WSIs. Our DAMLN considers slides and their tissue tiles as bags and instances under a multiple instance learning (MIL) setting. First, the instances are encoded via a pretrained CTransPath model and randomly divided into a set of pseudo bags. Pseudo-bag guided learning via cascading multi-head self-attention (MSA) and linear MSA blocks is then conducted to generate pseudo-bag level representations. Finally, attention pooling is applied to the class tokens of pseudo bags to generate multiple biomarker predictions. Our experiments conducted on large-scale datasets with over 3000 patients demonstrate substantial improvements over competing MIL models.
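A minimal sketch of attention pooling over pseudo-bag class tokens with a three-way multi-label head (ER/PR/HER2), as a generic ABMIL-style stand-in for the pooling step described above; the dimensions and gating design are assumptions, not DAMLN's double-tier architecture.

```python
# Hedged sketch: attention pooling over pseudo-bag embeddings + multi-label head.
import torch
import torch.nn as nn

class AttnPoolMultiLabel(nn.Module):
    def __init__(self, dim=768, n_labels=3):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(dim, n_labels)

    def forward(self, tokens):                            # tokens: (n_pseudo_bags, dim)
        w = torch.softmax(self.attn(tokens), dim=0)       # attention over pseudo bags
        slide_repr = (w * tokens).sum(0)                  # pooled slide-level feature
        return torch.sigmoid(self.head(slide_repr))       # P(ER), P(PR), P(HER2)

model = AttnPoolMultiLabel()
probs = model(torch.randn(32, 768))   # e.g. 32 pseudo-bag class tokens
print(probs)
```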
Double-tier Attention based Multi-label Learning Network for Predicting Biomarkers from Whole Slide Images of Breast Cancer
[ "Wang, Mingkang", "Wang, Tong", "Cong, Fengyu", "Lu, Cheng", "Xu, Hongming" ]
Conference
[ "https://github.com/PerrySkywalker/DAMLN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
364
null
https://papers.miccai.org/miccai-2024/paper/1977_paper.pdf
@InProceedings{ Han_On_MICCAI2024, author = { Han, Tianyu and Nebelung, Sven and Khader, Firas and Kather, Jakob Nikolas and Truhn, Daniel }, title = { { On Instabilities of Unsupervised Denoising Diffusion Models in Magnetic Resonance Imaging Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Denoising diffusion models offer a promising approach to accelerating magnetic resonance imaging (MRI) and producing diagnostic-level images in an unsupervised manner. However, our study demonstrates that even tiny worst-case potential perturbations transferred from a surrogate model can cause these models to generate fake tissue structures that may mislead clinicians. The transferability of such worst-case perturbations indicates that the robustness of image reconstruction may be compromised due to MR system imperfections or other sources of noise. Moreover, at larger perturbation strengths, diffusion models exhibit Gaussian noise-like artifacts that are distinct from those observed in supervised models and are more challenging to detect. Our results highlight the vulnerability of current state-of-the-art diffusion-based reconstruction models to possible worst-case perturbations and underscore the need for further research to improve their robustness and reliability in clinical settings.
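A minimal sketch of the transfer setting described above, assuming an FGSM-style worst-case perturbation crafted on a surrogate reconstruction network and then applied to the input of a different target network; both networks, the loss, and the perturbation budget are toy assumptions rather than the paper's diffusion-based setup.

```python
# Hedged sketch: craft a perturbation on a surrogate model, probe a target model.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))
target = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(8, 1, 3, padding=1))

x = torch.rand(1, 1, 64, 64, requires_grad=True)   # undersampled MR input (toy)
ref = torch.rand(1, 1, 64, 64)                     # reference image (toy)

loss = nn.functional.mse_loss(surrogate(x), ref)
loss.backward()
eps = 2.0 / 255.0
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()   # worst-case input

with torch.no_grad():
    clean_out = target(x.detach())
    adv_out = target(x_adv)        # transferred perturbation probes the target
print(float((clean_out - adv_out).abs().max()))
```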
On Instabilities of Unsupervised Denoising Diffusion Models in Magnetic Resonance Imaging Reconstruction
[ "Han, Tianyu", "Nebelung, Sven", "Khader, Firas", "Kather, Jakob Nikolas", "Truhn, Daniel" ]
Conference
2406.16983
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
365
null
https://papers.miccai.org/miccai-2024/paper/3117_paper.pdf
@InProceedings{ Han_BAPLe_MICCAI2024, author = { Hanif, Asif and Shamshad, Fahad and Awais, Muhammad and Naseer, Muzammal and Shahbaz Khan, Fahad and Nandakumar, Karthik and Khan, Salman and Anwer, Rao Muhammad }, title = { { BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Medical foundation models are gaining prominence in the medical community for their ability to derive general representations from extensive collections of medical image-text pairs. Recent research indicates that these models are susceptible to backdoor attacks, which allow them to classify clean images accurately but fail when specific triggers are introduced. However, traditional backdoor attacks necessitate a considerable amount of additional data to maliciously pre-train a model. This requirement is often impractical in medical imaging applications due to the usual scarcity of data. Inspired by the latest developments in learnable prompts, this work introduces a method to embed a backdoor into the medical foundation model during the prompt learning phase. By incorporating learnable prompts within the text encoder and introducing imperceptible learnable noise trigger to the input images, we exploit the full capabilities of the medical foundation models (Med-FM). Our method requires only a minimal subset of data to adjust the text prompts for downstream tasks, enabling the creation of an effective backdoor attack. Through extensive experiments with four medical foundation models, each pre-trained on different modalities and evaluated across six downstream datasets, we demonstrate the efficacy of our approach. Code is available at https://github.com/asif-hanif/baple
BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning
[ "Hanif, Asif", "Shamshad, Fahad", "Awais, Muhammad", "Naseer, Muzammal", "Shahbaz Khan, Fahad", "Nandakumar, Karthik", "Khan, Salman", "Anwer, Rao Muhammad" ]
Conference
2408.07440
[ "https://github.com/asif-hanif/baple" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
366
null
https://papers.miccai.org/miccai-2024/paper/2456_paper.pdf
@InProceedings{ She_FastSAM3D_MICCAI2024, author = { Shen, Yiqing and Li, Jingxing and Shao, Xinyuan and Inigo Romillo, Blanca and Jindal, Ankush and Dreizin, David and Unberath, Mathias }, title = { { FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Segment anything models (SAMs) are gaining attention for their zero-shot generalization capability in segmenting objects of unseen classes and in unseen domains when properly prompted. Interactivity is a key strength of SAMs, allowing users to iteratively provide prompts that specify objects of interest to refine outputs. However, to realize the interactive use of SAMs for 3D medical imaging tasks, rapid inference times are necessary. High memory requirements and long processing delays remain constraints that hinder the adoption of SAMs for this purpose. Specifically, while 2D SAMs applied to 3D volumes contend with repetitive computation to process all slices independently, 3D SAMs suffer from an exponential increase in model parameters and FLOPS. To address these challenges, we present FastSAM3D which accelerates SAM inference to 8 milliseconds per 128×128×128 3D volumetric image on an NVIDIA A100 GPU. This speedup is accomplished through 1) a novel layer-wise progressive distillation scheme that enables knowledge transfer from a complex 12-layer ViT-B to a lightweight 6-layer ViT-Tiny variant encoder without training from scratch; and 2) a novel 3D sparse flash attention to replace vanilla attention operators, substantially reducing memory needs and improving parallelization. Experiments on three diverse datasets reveal that FastSAM3D achieves a remarkable speedup of 527.38× compared to 2D SAMs and 8.75× compared to 3D SAMs on the same volumes without significant performance decline. Thus, FastSAM3D opens the door for low-cost truly interactive SAM-based 3D medical imaging segmentation with commonly used GPU hardware. Code is available at https://anonymous.4open.science/r/FastSAM3D-v1
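A minimal sketch of layer-wise feature distillation from a deeper teacher encoder to a shallower student, in the spirit of the progressive distillation described above; the layer pairing, MSE objective, and toy blocks are assumptions rather than FastSAM3D's actual schedule or ViT architecture.

```python
# Hedged sketch: align each student layer with a chosen teacher layer via MSE.
import torch
import torch.nn as nn

def block(dim=64):
    return nn.Sequential(nn.Linear(dim, dim), nn.GELU())

teacher = nn.ModuleList([block() for _ in range(12)]).eval()
student = nn.ModuleList([block() for _ in range(6)])
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

tokens = torch.randn(2, 16, 64)          # toy token sequence (B, N, D)
with torch.no_grad():
    t_feats, h = [], tokens
    for layer in teacher:
        h = layer(h); t_feats.append(h)

loss, h = 0.0, tokens
for i, layer in enumerate(student):
    h = layer(h)
    loss = loss + nn.functional.mse_loss(h, t_feats[2 * i + 1])  # assumed pairing
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```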
FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images
[ "Shen, Yiqing", "Li, Jingxing", "Shao, Xinyuan", "Inigo Romillo, Blanca", "Jindal, Ankush", "Dreizin, David", "Unberath, Mathias" ]
Conference
2403.09827
[ "https://github.com/arcadelab/FastSAM3D" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
367
null
https://papers.miccai.org/miccai-2024/paper/3922_paper.pdf
@InProceedings{ Tan_RDDNet_MICCAI2024, author = { Tang, Yilin and Zhang, Min and Feng, Jun }, title = { { RDD-Net: Randomized Joint Data-Feature Augmentation and Deep-Shallow Feature Fusion Networks for Automated Diagnosis of Glaucoma } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Glaucoma is an irreversible eye disease that has become the leading cause of human blindness worldwide. In recent years, deep learning has shown great potential for computer-aided diagnosis in clinics. However, the diversity in medical image quality and acquisition devices leads to distribution shifts that compromise the generalization performance of deep learning methods. To address this issue, many methods have relied on deep feature learning combined with either data-level or feature-level augmentation. However, these methods suffer from a limited search space of feature styles. Previous research indicated that introducing a diverse set of augmentations and domain randomization during training can expand the search space of feature styles. In this paper, we propose a Randomized joint Data-feature augmentation and Deep-shallow feature fusion method for automated diagnosis of glaucoma (RDD-Net). It consists of three main components: Data/Feature-level Augmentation (DFA), Explicit/Implicit augmentation (EI), and Deep-Shallow feature fusion (DS). DFA randomly selects data/feature-level augmentation statistics from a uniform distribution. EI involves both explicit augmentation, perturbing the style of the source domain data, and implicit augmentation, utilizing moment information. The randomized selection of different augmentation strategies broadens the diversity of feature styles. DS integrates deep and shallow features within the backbone. Extensive experiments have shown that RDD-Net achieves state-of-the-art effectiveness and generalization ability. The code is available at https://github.com/TangYilin610/RDD-Net.
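A minimal sketch of randomly switching between a data-level and a feature-level style augmentation at each training step, in the spirit of the DFA component above; the brightness perturbation, the MixStyle-like statistic perturbation, and their parameters are illustrative assumptions, not RDD-Net's actual sampling.

```python
# Hedged sketch: randomized choice between data-level and feature-level
# style augmentation.
import random
import torch

def data_level_aug(x):
    gamma = torch.empty(1).uniform_(0.8, 1.2)        # random intensity scaling
    return (x * gamma).clamp(0, 1)

def feature_level_aug(f, alpha=0.1):
    mu = f.mean(dim=(2, 3), keepdim=True)
    sigma = f.std(dim=(2, 3), keepdim=True) + 1e-6
    new_mu = mu * (1 + alpha * torch.randn_like(mu))
    new_sigma = sigma * (1 + alpha * torch.randn_like(sigma))
    return (f - mu) / sigma * new_sigma + new_mu      # re-style the feature map

x = torch.rand(2, 3, 64, 64)        # fundus image batch (toy)
f = torch.randn(2, 32, 16, 16)      # intermediate feature map (toy)
if random.random() < 0.5:
    x = data_level_aug(x)           # perturb the input image style
else:
    f = feature_level_aug(f)        # perturb per-channel feature statistics
```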
RDD-Net: Randomized Joint Data-Feature Augmentation and Deep-Shallow Feature Fusion Networks for Automated Diagnosis of Glaucoma
[ "Tang, Yilin", "Zhang, Min", "Feng, Jun" ]
Conference
[ "https://github.com/TangYilin610/RDD-Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
368
null
https://papers.miccai.org/miccai-2024/paper/1503_paper.pdf
@InProceedings{ Suv_Multimodal_MICCAI2024, author = { Suvon, Mohammod N. I. and Tripathi, Prasun C. and Fan, Wenrui and Zhou, Shuo and Liu, Xianyuan and Alabed, Samer and Osmani, Venet and Swift, Andrew J. and Chen, Chen and Lu, Haiping }, title = { { Multimodal Variational Autoencoder for Low-cost Cardiac Hemodynamics Instability Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Recent advancements in non-invasive detection of Cardiac Hemodynamic Instability (CHDI) primarily focus on applying machine learning techniques to a single data modality, e.g. cardiac magnetic resonance imaging (MRI). Despite their potential, these approaches often fall short especially when the size of labeled patient data is limited, a common challenge in the medical domain. Furthermore, only a few studies have explored multimodal methods to study CHDI, which mostly rely on costly modalities such as cardiac MRI and echocardiogram. In response to these limitations, we propose a novel multimodal variational autoencoder (CardioVAE_{X,G}) to integrate low-cost chest X-ray (CXR) and electrocardiogram (ECG) modalities with pre-training on a large unlabeled dataset. Specifically, CardioVAE_{X,G} introduces a novel tri-stream pre-training strategy to learn both shared and modality-specific features, thus enabling fine-tuning with both unimodal and multimodal datasets. We pre-train CardioVAE_{X,G} on a large, unlabeled dataset of 50,982 subjects from a subset of the MIMIC database and then fine-tune the pre-trained model on a labeled dataset of 795 subjects from the ASPIRE registry. Comprehensive evaluations against existing methods show that CardioVAE_{X,G} offers promising performance (AUROC = 0.79 and Accuracy = 0.77), representing a significant step forward in non-invasive prediction of CHDI. Our model also excels in producing fine interpretations of predictions directly associated with clinical features, thereby supporting clinical decision-making.
Multimodal Variational Autoencoder for Low-cost Cardiac Hemodynamics Instability Detection
[ "Suvon, Mohammod N. I.", "Tripathi, Prasun C.", "Fan, Wenrui", "Zhou, Shuo", "Liu, Xianyuan", "Alabed, Samer", "Osmani, Venet", "Swift, Andrew J.", "Chen, Chen", "Lu, Haiping" ]
Conference
2403.13658
[ "https://github.com/Shef-AIRE/AI4Cardiothoracic-CardioVAE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
369
null
https://papers.miccai.org/miccai-2024/paper/2097_paper.pdf
@InProceedings{ Zha_MAdapter_MICCAI2024, author = { Zhang, Xu and Ni, Bo and Yang, Yang and Zhang, Lefei }, title = { { MAdapter: A Better Interaction between Image and Language for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Conventional medical image segmentation methods are based only on images, implying a requirement for an adequate amount of high-quality labeled images. Text-guided segmentation methods have been widely regarded as a solution to break the performance bottleneck. In this study, we introduce a bidirectional Medical Adapter (MAdapter) in which visual and linguistic features extracted from pre-trained dual encoders undergo interactive fusion. Additionally, a specialized decoder is designed to further align the fused representation and the global textual representation. In addition, we extend the endoscopic polyp datasets with clinically oriented text annotations, following the guidance of medical professionals. Extensive experiments conducted on both the extended endoscopic polyp dataset and additional lung infection datasets demonstrate the superiority of our method.
MAdapter: A Better Interaction between Image and Language for Medical Image Segmentation
[ "Zhang, Xu", "Ni, Bo", "Yang, Yang", "Zhang, Lefei" ]
Conference
[ "https://github.com/XShadow22/MAdapter" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
370
null
https://papers.miccai.org/miccai-2024/paper/0551_paper.pdf
@InProceedings{ Aze_Deep_MICCAI2024, author = { Azevedo, Caio and Santra, Sanchayan and Kumawat, Sudhakar and Nagahara, Hajime and Morooka, Ken'ichi }, title = { { Deep Volume Reconstruction from Multi-focus Microscopic Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Reconstructing 3D volumes from optical microscopic images is useful in important areas such as cellular analysis, cancer research, and drug development. However, existing techniques either require specialized hardware or extensive sample preprocessing. Recently, Yamaguchi et al. proposed to solve this problem using just a single stack of optical microscopic images with different focus settings, reconstructing a voxel-based representation of the observation with a classical iterative optimization method. Inspired by this result, this work aims to explore this approach further using new state-of-the-art optimization techniques such as Deep Image Prior (DIP). Our analysis showcases the superiority of this approach over Yamaguchi et al. in reconstruction quality, hard metrics, and robustness to noise on synthetic data. Finally, we also demonstrate the effectiveness of our approach on real data, producing excellent reconstruction quality. Code available at: https://github.com/caiocj1/multifocus-3d-reconstruction.
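A minimal sketch of a Deep-Image-Prior-style fit for this kind of problem, assuming an untrained 3D CNN that maps fixed noise to a volume and a deliberately simplistic focal-stack forward model (depth-weighted averaging); the forward model, network, and shapes are stand-in assumptions, not the paper's optics or architecture.

```python
# Hedged sketch: DIP-style optimization of a volume against an observed focal stack.
import torch
import torch.nn as nn

D, H, W, n_focus = 16, 32, 32, 4

def render_focal_stack(vol):                      # vol: (1, 1, D, H, W)
    zs = torch.linspace(0, 1, D).view(1, 1, D, 1, 1)
    stack = []
    for f in torch.linspace(0, 1, n_focus):
        w = torch.exp(-((zs - f) ** 2) / 0.02)    # in-focus depths weigh more
        stack.append((vol * w).sum(2) / w.sum(2))
    return torch.cat(stack, dim=1)                # (1, n_focus, H, W)

net = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid())
z = torch.randn(1, 8, D, H, W)                    # fixed noise input (DIP)
observed = torch.rand(1, n_focus, H, W)           # measured focal stack (toy)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(200):
    vol = net(z)
    loss = nn.functional.mse_loss(render_focal_stack(vol), observed)
    opt.zero_grad(); loss.backward(); opt.step()
```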
Deep Volume Reconstruction from Multi-focus Microscopic Images
[ "Azevedo, Caio", "Santra, Sanchayan", "Kumawat, Sudhakar", "Nagahara, Hajime", "Morooka, Ken'ichi" ]
Conference
[ "https://github.com/caiocj1/multifocus-3d-reconstruction" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
371
null
https://papers.miccai.org/miccai-2024/paper/1838_paper.pdf
@InProceedings{ Gun_Online_MICCAI2024, author = { Gunnarsson, Niklas and Sjölund, Jens and Kimstrand, Peter and Schön, Thomas B. }, title = { { Online learning in motion modeling for intra-interventional image sequences } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Image monitoring and guidance during medical examinations can aid both diagnosis and treatment. However, the sampling frequency is often too low, which creates a need to estimate the missing images. We present a probabilistic motion model for sequential medical images, with the ability to both estimate motion between acquired images and forecast the motion ahead of time. The core is a low-dimensional temporal process based on a linear Gaussian state-space model with analytically tractable solutions for forecasting, simulation, and imputation of missing samples. The results, from two experiments on publicly available cardiac datasets, show reliable motion estimates and an improved forecasting performance using patient-specific adaptation by online learning.
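A minimal sketch of forecasting with a linear Gaussian state-space model using the standard Kalman prediction and update equations, which is the analytically tractable core referred to above; the dynamics, observation matrix, and noise covariances are toy values rather than the paper's learned low-dimensional motion model.

```python
# Hedged sketch: Kalman filtering and multi-step forecasting in a toy
# linear Gaussian state-space model.
import numpy as np

d_state, d_obs = 4, 2
A = np.eye(d_state) + 0.1 * np.eye(d_state, k=2)   # constant-velocity-like dynamics
C = np.eye(d_obs, d_state)                         # observe first two state dims
Q, R = 0.01 * np.eye(d_state), 0.05 * np.eye(d_obs)

def kalman_predict(m, P):
    return A @ m, A @ P @ A.T + Q

def kalman_update(m, P, y):
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    m = m + K @ (y - C @ m)
    P = (np.eye(d_state) - K @ C) @ P
    return m, P

m, P = np.zeros(d_state), np.eye(d_state)          # filtered mean / covariance
for y in np.random.randn(5, d_obs):                # low-dim motion codes (toy)
    m, P = kalman_predict(m, P)
    m, P = kalman_update(m, P, y)

# forecast 3 steps ahead without new observations
f_m, f_P = m, P
for _ in range(3):
    f_m, f_P = kalman_predict(f_m, f_P)
print(f_m)
```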
Online learning in motion modeling for intra-interventional image sequences
[ "Gunnarsson, Niklas", "Sjölund, Jens", "Kimstrand, Peter", "Schön, Thomas B." ]
Conference
2410.11491
[ "https://github.com/ngunnar/2D_motion_model" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
372
null
https://papers.miccai.org/miccai-2024/paper/2012_paper.pdf
@InProceedings{ Rei_Unsupervised_MICCAI2024, author = { Reisenbüchler, Daniel and Luttner, Lucas and Schaadt, Nadine S. and Feuerhake, Friedrich and Merhof, Dorit }, title = { { Unsupervised Latent Stain Adaptation for Computational Pathology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
In computational pathology, deep learning (DL) models for tasks such as segmentation or tissue classification are known to suffer from domain shifts due to different staining techniques. Stain adaptation aims to reduce the generalization error between different stains by training a model on source stains that generalizes to target stains. Despite the abundance of target stain data, a key challenge is the lack of annotations. To address this, we propose Unsupervised Latent Stain Adaptation (ULSA), a joint training scheme over artificially labeled and unlabeled data that includes all available stained images. Our method uses stain translation to enrich labeled source images with synthetic target images in order to increase the supervised signal. Moreover, we leverage unlabeled target stain images using stain-invariant feature consistency learning. With ULSA we present a semi-supervised strategy for efficient stain adaptation without access to annotated target stain data. Remarkably, ULSA is task agnostic in patch-level analysis for whole slide images (WSIs). Through extensive evaluation on external datasets, we demonstrate that ULSA achieves state-of-the-art (SOTA) performance in kidney tissue segmentation and breast cancer classification across a spectrum of staining variations. Our findings suggest that ULSA is an important framework for stain adaptation in computational pathology.
Unsupervised Latent Stain Adaptation for Computational Pathology
[ "Reisenbüchler, Daniel", "Luttner, Lucas", "Schaadt, Nadine S.", "Feuerhake, Friedrich", "Merhof, Dorit" ]
Conference
2406.19081
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
373
null
https://papers.miccai.org/miccai-2024/paper/0383_paper.pdf
@InProceedings{ Zha_Diseaseinformed_MICCAI2024, author = { Zhang, Jiajin and Wang, Ge and Kalra, Mannudeep K. and Yan, Pingkun }, title = { { Disease-informed Adaptation of Vision-Language Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
In medical image analysis, the scarcity of expertise and the high cost of data annotation limit the development of large artificial intelligence models. This paper investigates the potential of transfer learning with pre-trained vision-language models (VLMs) in this domain. Currently, VLMs still struggle to transfer to underrepresented diseases with minimal presence and to new diseases entirely absent from the pre-training dataset. We argue that effective adaptation of VLMs hinges on the nuanced representation learning of disease concepts. By capitalizing on the joint visual-linguistic capabilities of VLMs, we introduce disease-informed contextual prompting in a novel disease prototype learning framework. This approach enables VLMs to grasp the concepts of new diseases effectively and efficiently, even with limited data. Extensive experiments across multiple image modalities showcase notable enhancements in performance compared to existing techniques.
Disease-informed Adaptation of Vision-Language Models
[ "Zhang, Jiajin", "Wang, Ge", "Kalra, Mannudeep K.", "Yan, Pingkun" ]
Conference
2405.15728
[ "https://github.com/RPIDIAL/Disease-informed-VLM-Adaptation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
374
null
https://papers.miccai.org/miccai-2024/paper/0321_paper.pdf
@InProceedings{ Mia_Cross_MICCAI2024, author = { Miao, Juzheng and Chen, Cheng and Zhang, Keli and Chuai, Jie and Li, Quanzheng and Heng, Pheng-Ann }, title = { { Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Semi-supervised learning (SSL) has achieved notable progress in medical image segmentation. To achieve effective SSL, a model needs to efficiently learn from limited labeled data and effectively exploit knowledge from abundant unlabeled data. Recent developments in visual foundation models, such as the Segment Anything Model (SAM), have demonstrated remarkable adaptability with improved sample efficiency. To harness the power of foundation models for application in SSL, we propose a cross prompting consistency method with the Segment Anything Model (CPC-SAM) for semi-supervised medical image segmentation. Our method employs SAM’s unique prompt design and introduces a cross-prompting strategy within a dual-branch framework to automatically generate prompts and supervision across two decoder branches, enabling effective learning from both scarce labeled data and valuable unlabeled data. We further design a novel prompt consistency regularization to reduce the prompt position sensitivity and to enhance output invariance under different prompts. We validate our method on two medical image segmentation tasks. Extensive experiments with different labeled-data ratios and modalities demonstrate the superiority of our proposed method over state-of-the-art SSL methods, with more than 9% Dice improvement on the breast cancer segmentation task.
Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation
[ "Miao, Juzheng", "Chen, Cheng", "Zhang, Keli", "Chuai, Jie", "Li, Quanzheng", "Heng, Pheng-Ann" ]
Conference
2407.05416
[ "https://github.com/JuzhengMiao/CPC-SAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
375
null
https://papers.miccai.org/miccai-2024/paper/3035_paper.pdf
@InProceedings{ Wan_An_MICCAI2024, author = { Wang, Xiaocheng and Mekbib, D. B. and Zhou, Tian and Zhu, Junming and Zhang, Li and Cheng, Ruidong and Zhang, Jianmin and Ye, Xiangming and Xu, Dongrong }, title = { { An MR-Compatible Virtual Reality System for Assessing Neuronal Plasticity of Sensorimotor Neurons and Mirror Neurons } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Virtual reality (VR) assisted rehabilitation systems are being used increasingly to supplement upper extremity (UE) functional rehabilitation. Mirror therapy (MT) is reportedly a useful training approach for encouraging motor functional recovery. However, the majority of current systems are not compatible with magnetic resonance (MR) environments. Resting-state functional magnetic resonance imaging (rs-fMRI) data, used to measure neuronal recovery status, can only be collected by these systems after participants have completed the VR therapy. As a result, real-time observation of the brain in working status remains unattainable. To address this challenge, we developed a novel MR-compatible VR system for the Assessment of UE motor functions (MR.VRA). Three different modes are provided, adapting to a participant's level of sensorimotor cortex impairment: a unilateral-contralateral mode, a unilateral-ipsilateral mode, and a unilateral-bilateral mode. Twenty healthy subjects were recruited to validate MR.VRA for UE function rehabilitation and assessment in three fMRI tasks. The results showed that MR.VRA succeeded in conducting the fMRI tasks in the MR scanner bore while stimulating sensorimotor neurons and mirror neurons using its embedded therapies. The findings suggest that MR.VRA may be a promising alternative for assessing the neurorehabilitation of stroke patients with UE motor function impairment in an MR environment, as it allows inspection of direct imaging evidence of the activity of neurons in the cortices related to UE motor functions.
An MR-Compatible Virtual Reality System for Assessing Neuronal Plasticity of Sensorimotor Neurons and Mirror Neurons
[ "Wang, Xiaocheng", "Mekbib, D. B.", "Zhou, Tian", "Zhu, Junming", "Zhang, Li", "Cheng, Ruidong", "Zhang, Jianmin", "Ye, Xiangming", "Xu, Dongrong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
376
null
https://papers.miccai.org/miccai-2024/paper/1908_paper.pdf
@InProceedings{ Pra_3D_MICCAI2024, author = { Prabhakar, Chinmay and Shit, Suprosanna and Musio, Fabio and Yang, Kaiyuan and Amiranashvili, Tamaz and Paetzold, Johannes C. and Li, Hongwei Bran and Menze, Bjoern }, title = { { 3D Vessel Graph Generation Using Denoising Diffusion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Blood vessel networks, represented as 3D graphs, help predict disease biomarkers, simulate blood flow, and aid in synthetic image generation, relevant in both clinical and pre-clinical settings. However, generating realistic vessel graphs that correspond to an anatomy of interest is challenging. Previous methods aimed at generating vessel trees mostly in an autoregressive style and could not be applied to vessel graphs with cycles such as capillaries or specific anatomical structures such as the Circle of Willis. Addressing this gap, we introduce the first application of denoising diffusion models in 3D vessel graph generation. Our contributions include a novel, two-stage generation method that sequentially denoises node coordinates and edges. We experiment with two real-world vessel datasets, consisting of microscopic capillaries and major cerebral vessels, and demonstrate the generalizability of our method for producing diverse, novel, and anatomically plausible vessel graphs.
3D Vessel Graph Generation Using Denoising Diffusion
[ "Prabhakar, Chinmay", "Shit, Suprosanna", "Musio, Fabio", "Yang, Kaiyuan", "Amiranashvili, Tamaz", "Paetzold, Johannes C.", "Li, Hongwei Bran", "Menze, Bjoern" ]
Conference
2407.05842
[ "https://github.com/chinmay5/vessel_diffuse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
377
null
https://papers.miccai.org/miccai-2024/paper/0627_paper.pdf
@InProceedings{ Fuj_EgoSurgeryPhase_MICCAI2024, author = { Fujii, Ryo and Hatano, Masashi and Saito, Hideo and Kajita, Hiroki }, title = { { EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Surgical phase recognition has gained significant attention due to its potential to offer solutions to numerous demands of the modern operating room. However, most existing methods concentrate on minimally invasive surgery (MIS), leaving surgical phase recognition for open surgery understudied. This discrepancy is primarily attributed to the scarcity of publicly available open surgery video datasets for surgical phase recognition. To address this issue, we introduce a new egocentric open surgery video dataset for phase recognition, named EgoSurgery-Phase. This dataset comprises 15 hours of real open surgery videos spanning 9 distinct surgical phases, all captured using an egocentric camera attached to the surgeon’s head. In addition to video, EgoSurgery-Phase also provides eye gaze data. As far as we know, it is the first real open surgery video dataset for surgical phase recognition that is publicly available. Furthermore, inspired by the notable success of masked autoencoders (MAEs) in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). Considering that the regions where surgeons’ gaze focuses are often critical for surgical phase recognition (e.g., the surgical field), in our GGMAE the gaze information acts as an empirical semantic-richness prior that guides the masking process, promoting better attention to semantically rich spatial regions. GGMAE significantly improves over the previous state-of-the-art recognition method (by 6.4% in Jaccard) and the masked autoencoder-based method (by 3.1% in Jaccard) on EgoSurgery-Phase. The dataset will be released at https://github.com/Fujiry0/EgoSurgery.
EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos
[ "Fujii, Ryo", "Hatano, Masashi", "Saito, Hideo", "Kajita, Hiroki" ]
Conference
2405.19644
[ "https://github.com/Fujiry0/EgoSurgery" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
378
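EgoSurgery-Phase pairs video with eye gaze, and GGMAE uses gaze as a semantic-richness prior for the masking step of a masked autoencoder. The sketch below shows one plausible way to turn a gaze heatmap into per-patch masking probabilities; whether high-gaze patches are preferentially kept visible or preferentially masked is an assumption controlled by `keep_gaze_visible`, and the function is not taken from the released code.

```python
# Sketch of gaze-biased patch masking for a ViT-style masked autoencoder.
# The direction of the bias (masking high-gaze patches more vs. less) is an
# assumption here, not taken from the GGMAE paper.
import torch

def gaze_guided_mask(gaze_map, patch=16, mask_ratio=0.75, keep_gaze_visible=True):
    """gaze_map: (H, W) non-negative gaze heatmap for one frame."""
    # Average gaze per non-overlapping patch -> one weight per token.
    patches = gaze_map.unfold(0, patch, patch).unfold(1, patch, patch)
    weights = patches.mean(dim=(-1, -2)).flatten() + 1e-6   # avoid zero weights
    num_patches = weights.numel()
    num_mask = int(mask_ratio * num_patches)
    # Higher sampling weight -> more likely to be *masked*.
    probs = 1.0 / weights if keep_gaze_visible else weights
    masked_idx = torch.multinomial(probs, num_mask, replacement=False)
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[masked_idx] = True                                  # True = masked token
    return mask

gaze = torch.rand(224, 224)            # toy gaze heatmap
mask = gaze_guided_mask(gaze)
print(mask.shape, int(mask.sum()))     # torch.Size([196]) 147
```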
null
https://papers.miccai.org/miccai-2024/paper/0143_paper.pdf
@InProceedings{ Zha_AFoundation_MICCAI2024, author = { Zhang, Xinru and Ou, Ni and Basaran, Berke Doga and Visentin, Marco and Qiao, Mengyun and Gu, Renyang and Ouyang, Cheng and Liu, Yaou and Matthews, Paul M. and Ye, Chuyang and Bai, Wenjia }, title = { { A Foundation Model for Brain Lesion Segmentation with Mixture of Modality Experts } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Brain lesion segmentation plays an essential role in neurological research and diagnosis. As brain lesions can be caused by various pathological alterations, different types of brain lesions tend to manifest with different characteristics on different imaging modalities. Due to this complexity, brain lesion segmentation methods are often developed in a task-specific manner. A specific segmentation model is developed for a particular lesion type and imaging modality. However, the use of task-specific models requires predetermination of the lesion type and imaging modality, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for 3D brain lesion segmentation, which can automatically segment different types of brain lesions for input data of various imaging modalities. We formulate a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network combines the expert predictions and fosters expertise collaboration. Furthermore, we introduce a curriculum learning strategy during training to avoid the degeneration of each expert network and preserve their specialization. We evaluated the proposed method on nine brain lesion datasets, encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models and provides promising generalization to unseen datasets.
A Foundation Model for Brain Lesion Segmentation with Mixture of Modality Experts
[ "Zhang, Xinru", "Ou, Ni", "Basaran, Berke Doga", "Visentin, Marco", "Qiao, Mengyun", "Gu, Renyang", "Ouyang, Cheng", "Liu, Yaou", "Matthews, Paul M.", "Ye, Chuyang", "Bai, Wenjia" ]
Conference
2405.10246
[ "https://github.com/ZhangxinruBIT/MoME" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
379
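The MoME abstract describes per-modality expert networks whose predictions are combined by a gating network. Below is a toy 2D sketch of that combination pattern (experts plus a softmax gate over their outputs); the paper's 3D backbone, hierarchical gate, and curriculum training are not reproduced, and all module sizes are assumptions.

```python
# Minimal sketch of a Mixture-of-Modality-Experts combination: one expert per
# imaging modality plus a gating network that weights their predictions.
import torch
import torch.nn as nn

class MoME2D(nn.Module):
    def __init__(self, n_experts=5, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Conv2d(1, n_classes, kernel_size=3, padding=1) for _ in range(n_experts)]
        )
        # Gate looks at the input image and outputs one weight per expert.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, n_experts)
        )

    def forward(self, x):
        logits = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C, H, W)
        w = torch.softmax(self.gate(x), dim=-1)                    # (B, E)
        return (w[:, :, None, None, None] * logits).sum(dim=1)     # weighted fusion

model = MoME2D()
out = model(torch.randn(2, 1, 64, 64))   # toy single-channel slices
print(out.shape)                          # torch.Size([2, 2, 64, 64])
```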
null
https://papers.miccai.org/miccai-2024/paper/2517_paper.pdf
@InProceedings{ Kim_Multimodal_MICCAI2024, author = { Kim, Junsik and Shi, Zhiyi and Jeong, Davin and Knittel, Johannes and Yang, Helen Y. and Song, Yonghyun and Li, Wanhua and Li, Yicong and Ben-Yosef, Dalit and Needleman, Daniel and Pfister, Hanspeter }, title = { { Multimodal Learning for Embryo Viability Prediction in Clinical IVF } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
In clinical In-Vitro Fertilization (IVF), identifying the most viable embryo for transfer is important for increasing the likelihood of a successful pregnancy. Traditionally, this process involves embryologists manually assessing embryos’ static morphological features at specific intervals using light microscopy. This manual evaluation is not only time-intensive and costly, due to the need for expert analysis, but also inherently subjective, leading to variability in the selection process. To address these challenges, we develop a multimodal model that leverages both time-lapse video data and Electronic Health Records (EHRs) to predict embryo viability. A key challenge of our research is to effectively combine time-lapse video and EHR data, given their distinct modality characteristics. We comprehensively analyze our multimodal model with various modality inputs and integration approaches. Our approach will enable fast and automated embryo viability predictions at scale for clinical IVF.
Multimodal Learning for Embryo Viability Prediction in Clinical IVF
[ "Kim, Junsik", "Shi, Zhiyi", "Jeong, Davin", "Knittel, Johannes", "Yang, Helen Y.", "Song, Yonghyun", "Li, Wanhua", "Li, Yicong", "Ben-Yosef, Dalit", "Needleman, Daniel", "Pfister, Hanspeter" ]
Conference
2410.15581
[ "https://github.com/mibastro/MMIVF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
380
null
https://papers.miccai.org/miccai-2024/paper/3005_paper.pdf
@InProceedings{ Yan_Airway_MICCAI2024, author = { Yang, Xuan and Chen, Lingyu and Zheng, Yuchao and Ma, Longfei and Chen, Fang and Ning, Guochen and Liao, Hongen }, title = { { Airway segmentation based on topological structure enhancement using multi-task learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Airway segmentation in chest computed tomography (CT) images is critical for tracheal disease diagnosis and surgical navigation. However, airway segmentation is challenging due to complex tree structures and branches of different sizes. To enhance airway integrity and reduce fractures during bronchus segmentation, we propose a novel network for airway segmentation, using centerline detection as an auxiliary task to enhance topology awareness. The network introduces a topology embedding interactive module to emphasize the geometric properties of tracheal connections and reduce bronchial breakage. In addition, the proposed topology-enhanced attention module captures contextual and spatial information to improve bronchioles segmentation. In this paper, we conduct qualitative and quantitative experiments on two public datasets. Compared to several state-of-the-art algorithms, our method performs better at detecting terminal bronchi and ensuring the continuity of the entire trachea while maintaining comparable segmentation accuracy. Our code is available at https://github.com/xyang-11/airway_seg.
Airway segmentation based on topological structure enhancement using multi-task learning
[ "Yang, Xuan", "Chen, Lingyu", "Zheng, Yuchao", "Ma, Longfei", "Chen, Fang", "Ning, Guochen", "Liao, Hongen" ]
Conference
[ "https://github.com/xyang-11/airway_seg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
381
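The airway paper uses centerline detection as an auxiliary task alongside segmentation. A minimal sketch of such a multi-task objective is shown below, combining a Dice+BCE segmentation term with a BCE centerline term; the 0.5 auxiliary weight and the loss composition are assumptions, not the paper's exact formulation.

```python
# Sketch of a multi-task objective combining airway segmentation with an
# auxiliary centerline-detection head.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    pred = torch.sigmoid(pred)
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(seg_logits, seg_gt, cl_logits, cl_gt, aux_weight=0.5):
    # Main task: airway segmentation (Dice + BCE).
    seg = dice_loss(seg_logits, seg_gt) + F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    # Auxiliary task: centerline heatmap detection (BCE).
    centerline = F.binary_cross_entropy_with_logits(cl_logits, cl_gt)
    return seg + aux_weight * centerline

seg_logits = torch.randn(1, 1, 32, 64, 64, requires_grad=True)  # toy 3D predictions
cl_logits = torch.randn(1, 1, 32, 64, 64, requires_grad=True)
seg_gt = (torch.rand_like(seg_logits) > 0.9).float()
cl_gt = (torch.rand_like(cl_logits) > 0.99).float()
loss = multitask_loss(seg_logits, seg_gt, cl_logits, cl_gt)
loss.backward()
print(float(loss))
```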
null
https://papers.miccai.org/miccai-2024/paper/0375_paper.pdf
@InProceedings{ Wu_MMRetinal_MICCAI2024, author = { Wu, Ruiqi and Zhang, Chenran and Zhang, Jianle and Zhou, Yi and Zhou, Tao and Fu, Huazhu }, title = { { MM-Retinal: Knowledge-Enhanced Foundational Pretraining with Fundus Image-Text Expertise } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Current fundus image analysis models are predominantly built for specific tasks relying on individual datasets. The learning process is usually based on a data-driven paradigm without prior knowledge. To address this issue, we propose MM-Retinal, a multi-modal dataset that encompasses high-quality image-text pairs collected from professional fundus diagram books. Moreover, enabled by MM-Retinal, we present a novel Knowledge-enhanced foundational pretraining model which incorporates Fundus Image-Text expertise, called KeepFIT. It is designed with image similarity-guided text revision and a mixed training strategy to infuse expert knowledge. Our proposed fundus foundation model achieves state-of-the-art performance across six unseen downstream tasks and holds excellent generalization ability in zero-shot and few-shot scenarios. MM-Retinal and KeepFIT are available \href{https://github.com/lxirich/MM-Retinal}{here}.
MM-Retinal: Knowledge-Enhanced Foundational Pretraining with Fundus Image-Text Expertise
[ "Wu, Ruiqi", "Zhang, Chenran", "Zhang, Jianle", "Zhou, Yi", "Zhou, Tao", "Fu, Huazhu" ]
Conference
2405.11793
[ "https://github.com/lxirich/MM-Retinal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
382
null
https://papers.miccai.org/miccai-2024/paper/3658_paper.pdf
@InProceedings{ Abd_ANew_MICCAI2024, author = { Abdelhalim, Ibrahim and Abou El-Ghar, Mohamed and Dwyer, Amy and Ouseph, Rosemary and Contractor, Sohail and El-Baz, Ayman }, title = { { A New Non-Invasive AI-Based Diagnostic System for Automated Diagnosis of Acute Renal Rejection in Kidney Transplantation: Analysis of ADC Maps Extracted from Matched 3D Iso-Regions of the Transplanted Kidney } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Acute allograft rejection poses a significant challenge in kidney transplantation, the primary remedy for end-stage renal disease. Timely detection is crucial for intervention and graft preservation. A notable obstacle involves ensuring consistency across Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) scanning protocols at various Tesla levels. To tackle this, we propose a novel, non-invasive framework for automated diagnosis of acute renal rejection using DW-MRI. Our method comprises several key steps: Initially, we register the segmented kidney across different scanners, aligning them from the cortex to the medulla. Afterwards, the Apparent Diffusion Coefficient (ADC) is estimated for the segmented kidney. Then, the ADC maps are partitioned into 3D iso-surfaces from the cortex to the medulla using the fast-marching level sets method. Next, the Cumulative Distribution Function (CDF) of the ADC for each iso-surface is computed, and Spearman correlation is applied to these CDFs. Finally, we introduce a Transformer-based Correlations to Classes Converter (T3C) model to leverage these correlations for distinguishing between normal and acutely rejected transplants. Evaluation on a cohort of 94 subjects (40 with acute renal rejection and 54 control subjects) yields promising results, with a mean accuracy of 98.723%, a mean sensitivity of 97%, and a mean specificity of 100%, employing a leave-one-subject-out testing approach. These findings underscore the effectiveness and robustness of our proposed framework.
A New Non-Invasive AI-Based Diagnostic System for Automated Diagnosis of Acute Renal Rejection in Kidney Transplantation: Analysis of ADC Maps Extracted from Matched 3D Iso-Regions of the Transplanted Kidney
[ "Abdelhalim, Ibrahim", "Abou El-Ghar, Mohamed", "Dwyer, Amy", "Ouseph, Rosemary", "Contractor, Sohail", "El-Baz, Ayman" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
383
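The pipeline above computes, for each cortex-to-medulla iso-surface, the CDF of ADC values and then Spearman correlations between these CDFs as input to the T3C classifier. The sketch below reproduces just that statistics step on synthetic ADC samples; the number of layers, the ADC grid, and the toy data are assumptions.

```python
# Sketch of the iso-surface statistics step: empirical CDF of ADC values for
# each cortex-to-medulla layer, then pairwise Spearman correlation between
# those CDFs as a feature matrix for a downstream classifier.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_layers = 10
grid = np.linspace(0.0, 3.0e-3, 100)          # common ADC grid in mm^2/s (assumed)

# Toy ADC samples for each iso-surface layer of one kidney.
layers = [rng.normal(2.0e-3 - 5e-5 * i, 2e-4, size=500) for i in range(n_layers)]

# Empirical CDF of each layer evaluated on the common grid.
cdfs = np.stack([np.searchsorted(np.sort(v), grid) / v.size for v in layers])

# Pairwise Spearman correlations between layer CDFs -> classifier input.
rho, _ = spearmanr(cdfs, axis=1)
print(cdfs.shape, rho.shape)                   # (10, 100) (10, 10)
```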
null
https://papers.miccai.org/miccai-2024/paper/2156_paper.pdf
@InProceedings{ Li_SiFT_MICCAI2024, author = { Li, Xuyang and Zhang, Weizhuo and Yu, Yue and Zheng, Wei-Shi and Zhang, Tong and Wang, Ruixuan }, title = { { SiFT: A Serial Framework with Textual Guidance for Federated Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Deep learning has been extensively used in various medical scenarios. However, the data-hungry nature of deep learning poses significant challenges in the medical domain, where data is often private, scarce, and imbalanced. Federated learning emerges as a solution to this paradox. Federated learning aims to enable multiple data owners (i.e., clients) to collaboratively train a unified model without requiring clients to share their private data with others. In this study, we propose an innovative framework called SiFT (Serial Framework with Textual guidance) for federated learning. In our framework, the model is trained in a cyclic sequential manner inspired by the study of continual learning. In particular, with a continual learning strategy which employs a long-term model and a short-term model to emulate humans’ long-term and short-term memory, class knowledge across clients can be effectively accumulated through the serial learning process. In addition, one pre-trained biomedical language model is utilized to guide the training of the short-term model by embedding textual prior knowledge of each image class into the classifier head. Experimental evaluations on three public medical image datasets demonstrate that the proposed SiFT achieves superior performance with lower communication cost compared to traditional federated learning methods. The source code is available at https://openi.pcl.ac.cn/OpenMedIA/SiFT.git.
SiFT: A Serial Framework with Textual Guidance for Federated Learning
[ "Li, Xuyang", "Zhang, Weizhuo", "Yu, Yue", "Zheng, Wei-Shi", "Zhang, Tong", "Wang, Ruixuan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
384
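SiFT trains the model in a cyclic sequential manner across clients while a long-term model accumulates knowledge alongside a short-term model. The sketch below shows the serial visiting loop with an exponential-moving-average update standing in for the paper's continual-learning strategy; the EMA rule, the toy clients, and the linear model are assumptions.

```python
# Minimal sketch of serial (cyclic) federated training: a short-term model
# visits clients one after another, and a long-term model is kept as an EMA
# of the short-term weights.
import copy
import torch
import torch.nn as nn

def local_update(model, data, epochs=1, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()

@torch.no_grad()
def ema_update(long_term, short_term, momentum=0.9):
    # Long-term model slowly tracks the short-term model (assumed rule).
    for p_l, p_s in zip(long_term.parameters(), short_term.parameters()):
        p_l.mul_(momentum).add_(p_s, alpha=1 - momentum)

# Toy clients: each holds a small private dataset that never leaves the client.
clients = [[(torch.randn(8, 16), torch.randint(0, 3, (8,)))] for _ in range(4)]
short_term = nn.Linear(16, 3)
long_term = copy.deepcopy(short_term)

for communication_round in range(3):
    for data in clients:                    # serial visit, no parallel averaging
        local_update(short_term, data)
        ema_update(long_term, short_term)   # accumulate knowledge across clients
```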
null
https://papers.miccai.org/miccai-2024/paper/2217_paper.pdf
@InProceedings{ Kuj_Label_MICCAI2024, author = { Kujawa, Aaron and Dorent, Reuben and Ourselin, Sebastien and Vercauteren, Tom }, title = { { Label merge-and-split: A graph-colouring approach for memory-efficient brain parcellation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Whole brain parcellation requires inferring hundreds of segmentation labels in large image volumes and thus presents significant practical challenges for deep learning approaches. We introduce label merge-and-split, a method that first greatly reduces the effective number of labels required for learning-based whole brain parcellation and then recovers original labels. Using a greedy graph colouring algorithm, our method automatically groups and merges multiple spatially separate labels prior to model training and inference. The merged labels may be semantically unrelated. A deep learning model is trained to predict merged labels. At inference time, original labels are restored using atlas-based influence regions. In our experiments, the proposed approach reduces the number of labels by up to 68% while achieving segmentation accuracy comparable to the baseline method without label merging and splitting. Moreover, model training and inference times as well as GPU memory requirements were reduced significantly. The proposed method can be applied to all semantic segmentation tasks with a large number of spatially separate classes within an atlas-based prior.
Label merge-and-split: A graph-colouring approach for memory-efficient brain parcellation
[ "Kujawa, Aaron", "Dorent, Reuben", "Ourselin, Sebastien", "Vercauteren, Tom" ]
Conference
2404.10572
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
385
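The label merge step can be phrased as greedy colouring of a conflict graph in which labels that are spatially close must receive different merged labels. The sketch below builds such a graph by dilating each label mask and colours it with networkx; the dilation margin and the way adjacency is decided are assumptions, and the atlas-based split step at inference time is not shown.

```python
# Sketch of the label merge step: nodes are parcellation labels, edges connect
# labels that are spatially adjacent (so they must not share a merged label),
# and a greedy colouring turns each colour into one merged training label.
import numpy as np
import networkx as nx
from scipy.ndimage import binary_dilation

def merge_labels(seg, margin=3):
    """seg: integer label volume. Returns {original_label: merged_label}."""
    labels = [l for l in np.unique(seg) if l != 0]
    conflict = nx.Graph()
    conflict.add_nodes_from(labels)
    struct = np.ones((3,) * seg.ndim, dtype=bool)
    for l in labels:
        # Grow the label mask; any other label it now touches is a conflict.
        grown = binary_dilation(seg == l, structure=struct, iterations=margin)
        for other in np.unique(seg[grown]):
            if other != 0 and other != l:
                conflict.add_edge(l, other)
    colouring = nx.greedy_color(conflict, strategy="largest_first")
    return {l: c + 1 for l, c in colouring.items()}        # merged labels start at 1

# Toy 3D volume with 4 labels; labels 1 and 2 touch, labels 3 and 4 are far away.
seg = np.zeros((20, 20, 20), dtype=np.int32)
seg[2:6, 2:6, 2:6] = 1
seg[6:9, 2:6, 2:6] = 2
seg[14:18, 14:18, 14:18] = 3
seg[2:5, 14:18, 14:18] = 4
mapping = merge_labels(seg)
print(mapping, "->", len(set(mapping.values())), "merged labels")
```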
null
https://papers.miccai.org/miccai-2024/paper/1201_paper.pdf
@InProceedings{ Gha_XTranPrune_MICCAI2024, author = { Ghadiri, Ali and Pagnucco, Maurice and Song, Yang }, title = { { XTranPrune: eXplainability-aware Transformer Pruning for Bias Mitigation in Dermatological Disease Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Numerous studies have demonstrated the effectiveness of deep learning models in medical image analysis. However, these models often exhibit performance disparities across different demographic cohorts, undermining their trustworthiness in clinical settings. While previous efforts have focused on bias mitigation techniques for traditional encoders, the increasing use of transformers in the medical domain calls for novel fairness enhancement methods. Additionally, the efficacy of explainability methods in improving model fairness remains unexplored. To address these gaps, we introduce XTranPrune, a bias mitigation method tailored for vision transformers. Leveraging state-of-the-art explainability techniques, XTranPrune generates a pruning mask to remove discriminatory modules while preserving performance-critical ones. Our experiments on two skin lesion datasets demonstrate the superior performance of XTranPrune across multiple fairness metrics. The code can be found at https://github.com/AliGhadirii/XTranPrune.
XTranPrune: eXplainability-aware Transformer Pruning for Bias Mitigation in Dermatological Disease Classification
[ "Ghadiri, Ali", "Pagnucco, Maurice", "Song, Yang" ]
Conference
[ "https://github.com/AliGhadirii/XTranPrune" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
386
null
https://papers.miccai.org/miccai-2024/paper/2832_paper.pdf
@InProceedings{ Zhe_Curriculum_MICCAI2024, author = { Zheng, Xiuqi and Zhang, Yuhang and Zhang, Haoran and Liang, Hongrui and Bao, Xueqi and Jiang, Zhuqing and Lao, Qicheng }, title = { { Curriculum Prompting Foundation Models for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Adapting large pre-trained foundation models, e.g., SAM, for medical image segmentation remains a significant challenge. A crucial step involves the formulation of a series of specialized prompts that incorporate specific clinical instructions. Past works have been heavily reliant on a singular type of prompt for each instance, necessitating manual input of an ideally correct prompt, which is less efficient. To tackle this issue, we propose to utilize prompts of different granularity, which are sourced from original images to provide a broader scope of clinical insights. However, combining prompts of varying types can pose a challenge due to potential conflicts. In response, we have designed a coarse-to-fine mechanism, referred to as curriculum prompting, that progressively integrates prompts of different types. Through extensive experiments on three public medical datasets across various modalities, we demonstrate the effectiveness of our proposed approach, which not only automates the prompt generation process but also yields superior performance compared to other SAM-based medical image segmentation methods. Code will be available at: https://github.com/AnnaZzz-zxq/Curriculum-Prompting.
Curriculum Prompting Foundation Models for Medical Image Segmentation
[ "Zheng, Xiuqi", "Zhang, Yuhang", "Zhang, Haoran", "Liang, Hongrui", "Bao, Xueqi", "Jiang, Zhuqing", "Lao, Qicheng" ]
Conference
2409.00695
[ "https://github.com/AnnaZzz-zxq/Curriculum-Prompting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
387
null
https://papers.miccai.org/miccai-2024/paper/1682_paper.pdf
@InProceedings{ Pan_HemodynamicDriven_MICCAI2024, author = { Pan, Xiang and Nie, Shiyun and Lv, Tianxu and Li, Lihua }, title = { { Hemodynamic-Driven Multi-Prototypes Learning for One-Shot Segmentation in Breast Cancer DCE-MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
In dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast, tumor segmentation is pivotal in screening and prognostic evaluation. However, automated segmentation typically requires a large amount of fully annotated data, and the multi-connected regions and complicated contours of tumors also pose a significant challenge. Existing few-shot segmentation methods tend to overfit the targets of base categories, resulting in inaccurate segmentation boundaries. In this work, we propose a hemodynamic-driven multi-prototypes network (HDMPNet) for one-shot segmentation that generates high-quality segmentation maps even for tumors of variable size, appearance, and shape. Specifically, a parameter-free module, called adaptive superpixel clustering (ASC), is designed to extract multi-prototypes by aggregating similar feature vectors for the multi-connected regions. Moreover, we develop a cross-fusion decoder (CFD) for optimizing boundary segmentation, which involves reweighting and aggregating support and query features. Besides, a bidirectional Gated Recurrent Unit is employed to acquire pharmacokinetic knowledge, subsequently driving the ASC and CFD modules. Experiments on two public breast cancer datasets show that our method yields higher segmentation performance than the existing state-of-the-art methods. The source code will be available on https://github.com/Medical-AI-Lab-of-JNU/HDMP.
Hemodynamic-Driven Multi-Prototypes Learning for One-Shot Segmentation in Breast Cancer DCE-MRI
[ "Pan, Xiang", "Nie, Shiyun", "Lv, Tianxu", "Li, Lihua" ]
Conference
[ "https://github.com/Medical-AI-Lab-of-JNU/HDMP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
388
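HDMPNet extracts multiple prototypes by aggregating similar support feature vectors from the multi-connected tumor regions. The sketch below uses plain k-means over masked foreground features as a stand-in for the paper's parameter-free adaptive superpixel clustering, so the prototype count and iteration budget are assumed hyperparameters.

```python
# Sketch of multi-prototype extraction from support features: foreground
# feature vectors are clustered and the cluster centroids serve as prototypes.
import torch

def extract_prototypes(feat, mask, n_proto=4, iters=10):
    """feat: (C, H, W) support features; mask: (H, W) binary tumor mask."""
    vecs = feat.permute(1, 2, 0)[mask.bool()]                # (N, C) foreground vectors
    protos = vecs[torch.randperm(vecs.shape[0])[:n_proto]]   # random initialisation
    for _ in range(iters):                                   # simple k-means loop
        d = torch.cdist(vecs, protos)                        # (N, n_proto) distances
        assign = d.argmin(dim=1)
        for k in range(n_proto):
            sel = vecs[assign == k]
            if sel.numel() > 0:
                protos[k] = sel.mean(dim=0)
    return protos                                            # (n_proto, C)

feat = torch.randn(64, 32, 32)          # toy support feature map
mask = torch.zeros(32, 32)
mask[8:20, 10:25] = 1                   # toy tumor mask
protos = extract_prototypes(feat, mask)
# Query pixels can then be scored by similarity to their nearest prototype.
print(protos.shape)                      # torch.Size([4, 64])
```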
null
https://papers.miccai.org/miccai-2024/paper/3246_paper.pdf
@InProceedings{ Emr_Learning_MICCAI2024, author = { Emre, Taha and Chakravarty, Arunava and Lachinov, Dmitrii and Rivail, Antoine and Schmidt-Erfurth, Ursula and Bogunović, Hrvoje }, title = { { Learning Temporally Equivariance for Degenerative Disease Progression in OCT by Predicting Future Representations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Contrastive pretraining provides robust representations by ensuring their invariance to different image transformations while simultaneously preventing representational collapse. Equivariant contrastive learning, on the other hand, provides representations sensitive to specific image transformations while remaining invariant to others. By introducing equivariance to time-induced transformations, such as disease-related anatomical changes in longitudinal imaging, the model can effectively capture such changes in the representation space. In this work, we propose a Time-equivariant Contrastive Learning (TC) method. First, an encoder embeds two unlabeled scans from different time points of the same patient into the representation space. Next, a temporal equivariance module is trained to predict the representation of a later visit based on the representation from one of the previous visits and the corresponding time interval with a novel regularization loss term while preserving the invariance property to irrelevant image transformations. On a large longitudinal dataset, our model clearly outperforms existing equivariant contrastive methods in predicting progression from intermediate age-related macular degeneration (AMD) to advanced wet-AMD within a specified time-window.
Learning Temporally Equivariance for Degenerative Disease Progression in OCT by Predicting Future Representations
[ "Emre, Taha", "Chakravarty, Arunava", "Lachinov, Dmitrii", "Rivail, Antoine", "Schmidt-Erfurth, Ursula", "Bogunović, Hrvoje" ]
Conference
2405.09404
[ "https://github.com/EmreTaha/TC-time_equivariant_disease_progression" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
389
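The time-equivariance module described above predicts the representation of a later visit from an earlier visit's representation and the time interval between them. Below is a minimal sketch of that prediction head and its regression loss; the toy encoder, the concatenation-based conditioning on the interval, and the loss weighting are assumptions rather than the paper's architecture.

```python
# Sketch of the temporal-equivariance idea: predict the representation of a
# later visit from an earlier visit's representation and the time interval.
import torch
import torch.nn as nn

class TimeEquivariantHead(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, z_early, delta_t):
        # Condition the prediction on the (normalised) time interval.
        return self.net(torch.cat([z_early, delta_t[:, None]], dim=-1))

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))   # toy scan encoder
head = TimeEquivariantHead()

scan_t1 = torch.randn(8, 1, 64, 64)     # earlier visit
scan_t2 = torch.randn(8, 1, 64, 64)     # later visit
delta_t = torch.rand(8)                 # time between visits (normalised)
z1, z2 = encoder(scan_t1), encoder(scan_t2)

pred_z2 = head(z1, delta_t)
equiv_loss = nn.functional.mse_loss(pred_z2, z2.detach())   # future-representation prediction
equiv_loss.backward()
print(float(equiv_loss))
```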
null
https://papers.miccai.org/miccai-2024/paper/2374_paper.pdf
@InProceedings{ Zha_PhyDiff_MICCAI2024, author = { Zhang, Juanhua and Yan, Ruodan and Perelli, Alessandro and Chen, Xi and Li, Chao }, title = { { Phy-Diff: Physics-guided Hourglass Diffusion Model for Diffusion MRI Synthesis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Diffusion MRI (dMRI) is an important neuroimaging technique with high acquisition costs. Deep learning approaches have been used to enhance dMRI and predict diffusion biomarkers through undersampled dMRI. To generate more comprehensive raw dMRI, generative adversarial network-based methods have been proposed to include b-values and b-vectors as conditions, but they are limited by unstable training and less desirable diversity. The emerging diffusion model (DM) promises to improve generative performance. However, it remains challenging to include essential information in conditioning DM for more relevant generation, i.e., the physical principles of dMRI and white matter tract structures. In this study, we propose a physics-guided diffusion model to generate high-quality dMRI. Our model introduces the physical principles of dMRI into the noise evolution of the diffusion process and introduces a query-based conditional mapping within the diffusion model. In addition, to enhance the anatomical fine details of the generation, we introduce the XTRACT atlas as a prior for white matter tracts by adopting an adapter technique. Our experiment results show that our method outperforms other state-of-the-art methods and has the potential to advance dMRI enhancement.
Phy-Diff: Physics-guided Hourglass Diffusion Model for Diffusion MRI Synthesis
[ "Zhang, Juanhua", "Yan, Ruodan", "Perelli, Alessandro", "Chen, Xi", "Li, Chao" ]
Conference
2406.03002
[ "https://github.com/Caewinix/Phy-Diff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
390
null
https://papers.miccai.org/miccai-2024/paper/0532_paper.pdf
@InProceedings{ Zha_CryoSAM_MICCAI2024, author = { Zhao, Yizhou and Bian, Hengwei and Mu, Michael and Uddin, Mostofa R. and Li, Zhenyang and Li, Xiang and Wang, Tianyang and Xu, Min }, title = { { CryoSAM: Training-free CryoET Tomogram Segmentation with Foundation Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Cryogenic Electron Tomography (CryoET) is a useful imaging technology in structural biology that is hindered by its need for manual annotations, especially in particle picking. Recent works have endeavored to remedy this issue with few-shot learning or contrastive learning techniques. However, supervised training is still inevitable for them. We instead choose to leverage the power of existing 2D foundation models and present a novel, training-free framework, CryoSAM. In addition to prompt-based single-particle instance segmentation, our approach can automatically search for similar features, facilitating full tomogram semantic segmentation with only one prompt. CryoSAM is composed of two major parts: 1) a prompt-based 3D segmentation system that uses prompts to complete single-particle instance segmentation recursively with Cross-Plane Self-Prompting, and 2) a Hierarchical Feature Matching mechanism that efficiently matches relevant features with extracted tomogram features. They collaborate to enable the segmentation of all particles of one category with just one particle-specific prompt. Our experiments show that CryoSAM outperforms existing works by a significant margin and requires even fewer annotations in particle picking. Further visualizations demonstrate its ability when dealing with full tomogram segmentation for various subcellular structures. Our code is available at: https://github.com/xulabs/aitom
CryoSAM: Training-free CryoET Tomogram Segmentation with Foundation Models
[ "Zhao, Yizhou", "Bian, Hengwei", "Mu, Michael", "Uddin, Mostofa R.", "Li, Zhenyang", "Li, Xiang", "Wang, Tianyang", "Xu, Min" ]
Conference
[ "https://github.com/xulabs/aitom" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
391
null
https://papers.miccai.org/miccai-2024/paper/1942_paper.pdf
@InProceedings{ Yu_UrFound_MICCAI2024, author = { Yu, Kai and Zhou, Yang and Bai, Yang and Soh, Zhi Da and Xu, Xinxing and Goh, Rick Siow Mong and Cheng, Ching-Yu and Liu, Yong }, title = { { UrFound: Towards Universal Retinal Foundation Models via Knowledge-Guided Masked Modeling } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Retinal foundation models aim to learn generalizable representations from diverse retinal images, facilitating label-efficient model adaptation across various ophthalmic tasks. Despite their success, current retinal foundation models are generally restricted to a single imaging modality, such as Color Fundus Photography (CFP) or Optical Coherence Tomography (OCT), limiting their versatility. Moreover, these models may struggle to fully leverage expert annotations and overlook the valuable domain knowledge essential for domain-specific representation learning. To overcome these limitations, we introduce UrFound, a retinal foundation model designed to learn universal representations from both multimodal retinal images and domain knowledge. UrFound is equipped with a modality-agnostic image encoder and accepts either CFP or OCT images as inputs. To integrate domain knowledge into representation learning, we encode expert annotation in text supervision and propose a knowledge-guided masked modeling strategy for model pre-training. It involves reconstructing randomly masked patches of retinal images while predicting masked text tokens conditioned on the corresponding image. This approach aligns multimodal images and textual expert annotations within a unified latent space, facilitating generalizable and domain-specific representation learning. Experimental results demonstrate that UrFound exhibits strong generalization ability and data efficiency when adapting to various tasks in retinal image analysis. By training on ~180k retinal images, UrFound significantly outperforms the state-of-the-art retinal foundation model trained on up to 1.6 million unlabelled images across 8 public retinal datasets. Our code and data are available at https://github.com/yukkai/UrFound.
UrFound: Towards Universal Retinal Foundation Models via Knowledge-Guided Masked Modeling
[ "Yu, Kai", "Zhou, Yang", "Bai, Yang", "Soh, Zhi Da", "Xu, Xinxing", "Goh, Rick Siow Mong", "Cheng, Ching-Yu", "Liu, Yong" ]
Conference
2408.05618
[ "https://github.com/yukkai/UrFound" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
392
null
https://papers.miccai.org/miccai-2024/paper/2275_paper.pdf
@InProceedings{ Cur_Lobar_MICCAI2024, author = { Curiale, Ariel H. and San José Estépar, Raúl }, title = { { Lobar Lung Density Embeddings with a Transformer encoder (LobTe) to predict emphysema progression in COPD } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Emphysema is defined as abnormal alveolar wall destruction that exhibits varied extent and distribution within the lung, leading to a heterogeneous spatial emphysema distribution. The progression of emphysema leads to decreased gas exchange, resulting in clinical worsening, and has been associated with higher mortality. Despite the ability to diagnose emphysema on CT scans, there are no methods to predict its evolution. Our study aims to propose and validate a novel prognostic lobe-based transformer (LobTe) model capable of capturing the complexity and spatial variability of emphysema progression. This model predicts the evolution of emphysema based on %LAA-950 measurements, thereby enhancing our understanding of Chronic Obstructive Pulmonary Disease (COPD). LobTe is specifically tailored to address the spatial heterogeneity in lung destruction via a transformer encoder using lobe embedding fingerprints to maintain global attention according to lobes’ positions. We trained and tested our model using data from 4,612 smokers, both with and without COPD, across all GOLD stages, who had complete baseline and 5-year follow-up data. Our findings from 1,830 COPDGene participants used for testing demonstrate the model’s effectiveness in predicting lung density evolution based on %LAA-950, achieving a Root Mean Squared Error (RMSE) of 2.957%, a correlation coefficient (ρ) of 0.643 and a coefficient of determination (R2) of 0.36. The model’s capability to predict changes in lung density over five years from baseline CT scans highlights its potential in the early identification of patients at risk of emphysema progression. Our results suggest that image embeddings derived from baseline CT scans effectively forecast emphysema progression by quantifying lung tissue loss.
Lobar Lung Density Embeddings with a Transformer encoder (LobTe) to predict emphysema progression in COPD
[ "Curiale, Ariel H.", "San José Estépar, Raúl" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
393
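LobTe attaches lobe-specific embedding fingerprints to density features before a transformer encoder that regresses %LAA-950 change. The sketch below shows that token-plus-lobe-embedding pattern with a standard PyTorch transformer encoder; the feature dimensions, pooling, and head are illustrative assumptions.

```python
# Sketch of a lobe-aware transformer regressor: one token of density features
# per lobe plus a learned lobe-position embedding, followed by a transformer
# encoder and a regression head for the change in %LAA-950.
import torch
import torch.nn as nn

class LobeTransformerRegressor(nn.Module):
    def __init__(self, n_lobes=5, feat_dim=32, d_model=64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.lobe_embed = nn.Embedding(n_lobes, d_model)   # lobe "fingerprint"
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, lobe_feats):                          # (B, n_lobes, feat_dim)
        B, L, _ = lobe_feats.shape
        ids = torch.arange(L, device=lobe_feats.device).expand(B, L)
        tokens = self.proj(lobe_feats) + self.lobe_embed(ids)
        return self.head(self.encoder(tokens).mean(dim=1)).squeeze(-1)

model = LobeTransformerRegressor()
delta_laa = model(torch.randn(4, 5, 32))     # toy density features for 5 lobes
print(delta_laa.shape)                        # torch.Size([4])
```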
null
https://papers.miccai.org/miccai-2024/paper/1510_paper.pdf
@InProceedings{ Par_Automated_MICCAI2024, author = { Park, Robin Y. and Windsor, Rhydian and Jamaludin, Amir and Zisserman, Andrew }, title = { { Automated Spinal MRI Labelling from Reports Using a Large Language Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
We propose a general pipeline to automate the extraction of labels from radiology reports using large language models, which we validate on spinal MRI reports. The efficacy of our method is measured on two distinct conditions: spinal cancer and stenosis. Using open-source models, our method surpasses GPT-4 on a held-out set of reports. Furthermore, we show that the extracted labels can be used to train an imaging model to classify the identified conditions in the accompanying MR scans. Both the cancer and stenosis classifiers trained using automated labels achieve comparable performance to models trained using scans manually annotated by clinicians.
Automated Spinal MRI Labelling from Reports Using a Large Language Model
[ "Park, Robin Y.", "Windsor, Rhydian", "Jamaludin, Amir", "Zisserman, Andrew" ]
Conference
2410.17235
[ "https://github.com/robinyjpark/AutoLabelClassifier" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
394
null
https://papers.miccai.org/miccai-2024/paper/0053_paper.pdf
@InProceedings{ Jia_Cardiac_MICCAI2024, author = { Jiang, Haojun and Sun, Zhenguo and Jia, Ning and Li, Meng and Sun, Yu and Luo, Shaqi and Song, Shiji and Huang, Gao }, title = { { Cardiac Copilot: Automatic Probe Guidance for Echocardiography with World Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Echocardiography is the only technique capable of real-time imaging of the heart and is vital for diagnosing the majority of cardiac diseases. However, there is a severe shortage of experienced cardiac sonographers, due to the heart’s complex structure and significant operational challenges. To mitigate this situation, we present a Cardiac Copilot system capable of providing real-time probe movement guidance to assist less experienced sonographers in conducting freehand echocardiography. This system can enable non-experts, especially in primary departments and medically underserved areas, to perform cardiac ultrasound examinations, potentially improving global healthcare delivery. The core innovation lies in proposing a data-driven world model, named Cardiac Dreamer, for representing cardiac spatial structures. This world model can provide structure features of any cardiac planes around the current probe position in the latent space, serving as a precise navigation map for autonomous plane localization. We train our model with real-world ultrasound data and corresponding probe motion from 110 routine clinical scans with 151K sample pairs by three certified sonographers. Evaluations on three standard planes with 37K sample pairs demonstrate that the world model can reduce navigation errors by up to 33% and exhibit more stable performance.
Cardiac Copilot: Automatic Probe Guidance for Echocardiography with World Model
[ "Jiang, Haojun", "Sun, Zhenguo", "Jia, Ning", "Li, Meng", "Sun, Yu", "Luo, Shaqi", "Song, Shiji", "Huang, Gao" ]
Conference
2406.13165
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
395
null
https://papers.miccai.org/miccai-2024/paper/2609_paper.pdf
@InProceedings{ Aay_Fair_MICCAI2024, author = { Aayushman and Gaddey, Hemanth and Mittal, Vidhi and Chawla, Manisha and Gupta, Gagan Raj }, title = { { Fair and Accurate Skin Disease Image Classification by Alignment with Clinical Labels } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Deep learning models have achieved great success in automating skin lesion diagnosis. However, the ethnic disparity in these models’ predictions needs to be addressed before deployment of these models. We introduce a novel approach: PatchAlign, to enhance skin condition image classification accuracy and fairness through alignment with clinical text representations of skin conditions. PatchAlign uses Graph Optimal Transport (\texttt{GOT}) Loss as a regularizer to perform cross-domain alignment. The representations thus obtained are robust and generalize well across skin tones, even with limited training samples. To reduce the effect of noise/artifacts in clinical dermatology images, we propose a learnable Masked Graph Optimal Transport for cross-domain alignment that further improves the fairness metrics.
Fair and Accurate Skin Disease Image Classification by Alignment with Clinical Labels
[ "Aayushman", "Gaddey, Hemanth", "Mittal, Vidhi", "Chawla, Manisha", "Gupta, Gagan Raj" ]
Conference
[ "https://github.com/aayushmanace/PatchAlign24" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
396
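PatchAlign aligns image representations with clinical text representations through a Graph Optimal Transport loss used as a regulariser. The sketch below uses a few plain Sinkhorn iterations on a cosine-distance cost between patch and token embeddings as a simplified stand-in for GOT (no graph structure or masking), so the cost, entropic regularisation, and iteration count are all assumptions.

```python
# Sketch of optimal-transport alignment between image patch embeddings and
# clinical text token embeddings, usable as an auxiliary regularisation loss.
import torch

def sinkhorn_ot(img_feats, txt_feats, eps=0.1, iters=50):
    """img_feats: (N, D), txt_feats: (M, D); returns an OT alignment cost."""
    img = torch.nn.functional.normalize(img_feats, dim=-1)
    txt = torch.nn.functional.normalize(txt_feats, dim=-1)
    cost = 1.0 - img @ txt.T                        # (N, M) cosine distance
    K = torch.exp(-cost / eps)                      # Gibbs kernel
    a = torch.full((cost.shape[0],), 1.0 / cost.shape[0])   # uniform marginals
    b = torch.full((cost.shape[1],), 1.0 / cost.shape[1])
    u = torch.ones_like(a)
    for _ in range(iters):                          # Sinkhorn-Knopp scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]              # transport plan
    return (plan * cost).sum()

img_feats = torch.randn(49, 256, requires_grad=True)   # toy 7x7 image patches
txt_feats = torch.randn(12, 256)                        # toy clinical-text tokens
ot_loss = sinkhorn_ot(img_feats, txt_feats)
ot_loss.backward()                                      # used as a regulariser
print(float(ot_loss))
```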
null
https://papers.miccai.org/miccai-2024/paper/0501_paper.pdf
@InProceedings{ Li_PASTA_MICCAI2024, author = { Li, Yitong and Yakushev, Igor and Hedderich, Dennis M. and Wachinger, Christian }, title = { { PASTA: Pathology-Aware MRI to PET CroSs-modal TrAnslation with Diffusion Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Positron emission tomography (PET) is a well-established functional imaging technique for diagnosing brain disorders. However, PET’s high costs and radiation exposure limit its widespread use. In contrast, magnetic resonance imaging (MRI) does not have these limitations. Although it also captures neurodegenerative changes, MRI is a less sensitive diagnostic tool than PET. To close this gap, we aim to generate synthetic PET from MRI. Herewith, we introduce PASTA, a novel pathology-aware image translation framework based on conditional diffusion models. Compared to the state-of-the-art methods, PASTA excels in preserving both structural and pathological details in the target modality, which is achieved through its highly interactive dual-arm architecture and multi-modal condition integration. A cycle exchange consistency and volumetric generation strategy elevate PASTA’s capability to produce high-quality 3D PET scans. Our qualitative and quantitative results confirm that the synthesized PET scans from PASTA not only reach the best quantitative scores but also preserve the pathology correctly. For Alzheimer’s classification, the performance of synthesized scans improves over MRI by 4%, almost reaching the performance of actual PET. Code is available at https://github.com/ai-med/PASTA.
PASTA: Pathology-Aware MRI to PET CroSs-modal TrAnslation with Diffusion Models
[ "Li, Yitong", "Yakushev, Igor", "Hedderich, Dennis M.", "Wachinger, Christian" ]
Conference
2405.16942
[ "https://github.com/ai-med/PASTA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
397
null
https://papers.miccai.org/miccai-2024/paper/1980_paper.pdf
@InProceedings{ Aza_EchoTracker_MICCAI2024, author = { Azad, Md Abulkalam and Chernyshov, Artem and Nyberg, John and Tveten, Ingrid and Lovstakken, Lasse and Dalen, Håvard and Grenne, Bjørnar and Østvik, Andreas }, title = { { EchoTracker: Advancing Myocardial Point Tracking in Echocardiography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Tissue tracking in echocardiography is challenging due to the complex cardiac motion and the inherent nature of ultrasound acquisitions. Although optical flow methods are considered state-of-the-art (SOTA), they struggle with long-range tracking, noise, occlusions, and drift throughout the cardiac cycle. Recently, novel learning-based point tracking techniques have been introduced to tackle some of these issues. In this paper, we build upon these techniques and introduce EchoTracker, a two-fold coarse-to-fine model that facilitates the tracking of queried points on a tissue surface across ultrasound image sequences. The architecture contains a preliminary coarse initialization of the trajectories, followed by reinforcement iterations based on fine-grained appearance changes. It is efficient, light, and can run on mid-range GPUs. Experiments demonstrate that the model outperforms SOTA methods, with an average position accuracy of 67% and a median trajectory error of 2.86 pixels. Furthermore, we show a relative improvement of 25% when using our model to calculate the global longitudinal strain (GLS) in a clinical test-retest dataset compared to other methods. This implies that learning-based point tracking can potentially improve performance and yield a higher diagnostic and prognostic value for clinical measurements than current techniques. Our source code is available at: https://github.com//.
EchoTracker: Advancing Myocardial Point Tracking in Echocardiography
[ "Azad, Md Abulkalam", "Chernyshov, Artem", "Nyberg, John", "Tveten, Ingrid", "Lovstakken, Lasse", "Dalen, Håvard", "Grenne, Bjørnar", "Østvik, Andreas" ]
Conference
2405.08587
[ "https://github.com/riponazad/echotracker/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
398
null
https://papers.miccai.org/miccai-2024/paper/3535_paper.pdf
@InProceedings{ Sul_HAMILQA_MICCAI2024, author = { Sultan, K. M. Arefeen and Hisham, Md Hasibul Husain and Orkild, Benjamin and Morris, Alan and Kholmovski, Eugene and Bieging, Erik and Kwan, Eugene and Ranjan, Ravi and DiBella, Ed and Elhabian, Shireen Y. }, title = { { HAMIL-QA: Hierarchical Approach to Multiple Instance Learning for Atrial LGE MRI Quality Assessment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The accurate evaluation of left atrial fibrosis via high-quality 3D Late Gadolinium Enhancement (LGE) MRI is crucial for atrial fibrillation management but is hindered by factors like patient movement and imaging variability. The pursuit of automated LGE MRI quality assessment is critical for enhancing diagnostic accuracy, standardizing evaluations, and improving patient outcomes. The deep learning models aimed at automating this process face significant challenges due to the scarcity of expert annotations, high computational costs, and the need to capture subtle diagnostic details in highly variable images. This study introduces HAMIL-QA, a multiple instance learning (MIL) framework, designed to overcome these obstacles. HAMIL-QA employs a hierarchical bag and sub-bag structure that allows for targeted analysis within sub-bags and aggregates insights at the volume level. This hierarchical MIL approach reduces reliance on extensive annotations, lessens computational load, and ensures clinically relevant quality predictions by focusing on diagnostically critical image features. Our experiments show that HAMIL-QA surpasses existing MIL methods and traditional supervised approaches in accuracy, AUROC, and F1-Score on an LGE MRI scan dataset, demonstrating its potential as a scalable solution for LGE MRI quality assessment automation. The code is available at: https://github.com/arf111/HAMIL-QA
HAMIL-QA: Hierarchical Approach to Multiple Instance Learning for Atrial LGE MRI Quality Assessment
[ "Sultan, K. M. Arefeen", "Hisham, Md Hasibul Husain", "Orkild, Benjamin", "Morris, Alan", "Kholmovski, Eugene", "Bieging, Erik", "Kwan, Eugene", "Ranjan, Ravi", "DiBella, Ed", "Elhabian, Shireen Y." ]
Conference
2407.07254
[ "https://github.com/arf111/HAMIL-QA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
399
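HAMIL-QA aggregates instance features into sub-bags and sub-bags into a volume-level prediction. The sketch below shows a generic two-level attention-MIL pooling stack in that spirit; the attention formulation follows the common ABMIL recipe and is not the paper's exact architecture, and all sizes are assumptions.

```python
# Sketch of two-level (hierarchical) attention MIL: instance features are
# pooled into sub-bag embeddings with one attention module, and sub-bag
# embeddings are pooled into a volume-level embedding with another.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim=64, hidden=32):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                      # x: (..., n, dim)
        w = torch.softmax(self.score(x), dim=-2)
        return (w * x).sum(dim=-2)             # attention-weighted average -> (..., dim)

class HierMIL(nn.Module):
    def __init__(self, dim=64, n_classes=3):
        super().__init__()
        self.instance_pool = AttnPool(dim)     # instances -> sub-bag embedding
        self.subbag_pool = AttnPool(dim)       # sub-bags  -> volume embedding
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, bag):                    # bag: (n_subbags, n_instances, dim)
        sub = self.instance_pool(bag)          # (n_subbags, dim)
        vol = self.subbag_pool(sub)            # (dim,)
        return self.cls(vol)                   # volume-level quality logits

model = HierMIL()
bag = torch.randn(6, 20, 64)                   # 6 sub-bags of 20 patch features each
logits = model(bag)
print(logits.shape)                             # torch.Size([3])
```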