---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
  results: []
---

## Model Overview

XPDNet for 12x accelerated MRI reconstruction on the StanfordKnees2019 dataset.

## ATOMMIC: Training

To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you have installed the latest PyTorch version.

```bash
pip install atommic['all']
```

## How to Use this Model

The model is available for use in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).

### Automatically instantiate the model

```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```

An example command-line sketch for launching the test configuration is given at the end of this card.

### Usage

You will need to download the Stanford Knees 2019 dataset to use this model effectively. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.

## Model Architecture

```yaml
model:
  model_name: XPDNet
  num_primal: 5
  num_dual: 1
  num_iter: 10
  use_primal_only: true
  kspace_model_architecture: CONV
  kspace_in_channels: 2
  kspace_out_channels: 2
  dual_conv_hidden_channels: 16
  dual_conv_num_dubs: 2
  dual_conv_batchnorm: false
  image_model_architecture: MWCNN
  imspace_in_channels: 2
  imspace_out_channels: 2
  mwcnn_hidden_channels: 16
  mwcnn_num_scales: 0
  mwcnn_bias: true
  mwcnn_batchnorm: false
  normalize_image: true
  dimensionality: 2
  reconstruction_loss:
    wasserstein: 1.0
```

## Training

```yaml
optim:
  name: adamw
  lr: 1e-4
  betas:
  - 0.9
  - 0.999
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp_find_unused_parameters_false
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 20
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```

## Performance

To compute the targets from the raw k-space data, using the chosen coil combination method together with the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.

Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`. An example sketch of the targets and evaluation commands is also given at the end of this card.

Results
-------

Evaluation against SENSE targets
--------------------------------

12x: MSE = 0.002691 +/- 0.008089 NMSE = 0.1117 +/- 0.1955 PSNR = 27.18 +/- 5.768 SSIM = 0.6544 +/- 0.2702

## Limitations

This model was trained on StanfordKnees2019 batch0 using UNet coil sensitivity maps estimation and Geometric Decomposition Coil Compression to a single coil, so its results might differ from those reported on the challenge leaderboard.

## References

[1] [ATOMMIC](https://github.com/wdika/atommic)

[2] Epperson K, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed Sensing of the Knee. SMRT Conference 2013;2013:1
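
## Example: Launching the Test Configuration (sketch)

The following is a minimal sketch, not a verified recipe: it assumes the `atommic run -c <config>` entry point described in the ATOMMIC repository, and the test YAML filename is a placeholder that must be matched against the files actually present under `projects/REC/StanfordKnees2019/conf`.

```bash
# Sketch only. Assumptions: the `atommic run -c <config>` entry point from the ATOMMIC
# README, and a placeholder test YAML name under projects/REC/StanfordKnees2019/conf.
git clone https://github.com/wdika/atommic.git
cd atommic

# Point the chosen test configuration at this checkpoint (pretrained: true, mode: test,
# checkpoint: <URL from "Automatically instantiate the model" above>) and at your local
# copy of the Stanford Knees 2019 dataset, then run:
atommic run -c projects/REC/StanfordKnees2019/conf/<your_test_config>.yaml
```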
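
## Example: Computing Targets and Evaluating (sketch)

Again a sketch under stated assumptions: the targets step reuses the same assumed `atommic run -c <config>` entry point with one of the YAMLs under `conf/targets`, and only the `--evaluation_type per_slice` flag of the evaluation script comes from this card. The names and order of the target and reconstruction path arguments are assumptions and should be checked against the script's `--help` output.

```bash
# Sketch only. The targets YAML name and the evaluation script's positional arguments
# are placeholders/assumptions; verify them against the repository before running.
atommic run -c projects/REC/StanfordKnees2019/conf/targets/<targets_config>.yaml
python tools/evaluation/reconstruction.py --evaluation_type per_slice <targets_dir> <reconstructions_dir>
```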