---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 12x accelerated MRI reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after installing the latest PyTorch version.
```shell
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
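As a minimal sketch of pointing the model at a local copy of the dataset (the key names and path below are illustrative assumptions, not the authoritative ATOMMIC schema; consult the linked configuration YAML files for the exact structure):

```yaml
# Illustrative fragment only: key names and the data path are assumptions;
# see the example configs linked above for the exact schema.
model:
  test_ds:
    data_path: /path/to/StanfordKnees2019/batch0/test
```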
## Model Architecture
```yaml
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
SENSE_unet_num_filters: 16
SENSE_unet_num_pool_layers: 2
SENSE_unet_dropout_probability: 0.0
SENSE_unet_padding_size: 11
SENSE_unet_normalize: true
dimensionality: 2
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
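For intuition, the `InverseSquareRootAnnealing` schedule can be sketched as linear warmup followed by decay proportional to the inverse square root of the step count. The function below is a standalone illustration of that common formulation (ATOMMIC's exact implementation may differ in details such as step offsets):

```python
def inverse_sqrt_lr(step, base_lr=1e-4, total_steps=1000, warmup_ratio=0.1, min_lr=0.0):
    """Sketch of inverse square-root annealing with linear warmup.

    Warmup covers the first `warmup_ratio` fraction of `total_steps`;
    afterwards the rate decays as 1/sqrt(step), normalized so that the
    learning rate equals `base_lr` at the end of warmup.
    """
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear warmup
    return max(min_lr, base_lr * (warmup_steps / (step + 1)) ** 0.5)
```

With the settings above (`lr: 1e-4`, `warmup_ratio: 0.1`), the rate ramps up over the first 10% of steps and then decays smoothly toward `min_lr`.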
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
### Results

#### Evaluation against SENSE targets

12x: MSE = 0.001713 +/- 0.006045, NMSE = 0.06275 +/- 0.127, PSNR = 29.52 +/- 6.327, SSIM = 0.7274 +/- 0.2913
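The reported MSE, NMSE, and PSNR follow standard image-reconstruction definitions. As a minimal illustration (not the project's evaluation script, which also computes SSIM), the per-slice values can be computed as:

```python
import numpy as np

def per_slice_metrics(target, recon):
    """MSE, NMSE, and PSNR for a single 2D slice (standard definitions)."""
    mse = np.mean((target - recon) ** 2)
    nmse = np.linalg.norm(target - recon) ** 2 / np.linalg.norm(target) ** 2
    psnr = 20 * np.log10(target.max() / np.sqrt(mse)) if mse > 0 else np.inf
    return {"MSE": float(mse), "NMSE": float(nmse), "PSNR": float(psnr)}
```

The reported aggregates are then the mean and standard deviation of these values over all evaluated slices.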
## Limitations
This model was trained on StanfordKnees2019 batch0 using UNet-based coil sensitivity maps estimation and Geometric Decomposition coil compression to a single coil, so its results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1