---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- VSNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
  results: []

---


## Model Overview

Variable-Splitting Net (VSNet) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.


## ATOMMIC: Training

To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after installing the latest PyTorch version.
```bash
pip install "atommic[all]"
```
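
You can verify the installation with a quick import check. This is a minimal sanity check and assumes the installed `atommic` package exposes a `__version__` attribute:

```bash
# Minimal sanity check: confirm that ATOMMIC is importable after installation
python -c "import atommic; print(atommic.__version__)"
```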

## How to Use this Model

The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
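
As a rough sketch of how such a configuration file is consumed, the command below assumes the `atommic run -c <config>` command-line entry point and uses a placeholder config path; pick the actual YAML file from the linked `conf` directory:

```bash
# Illustrative only: launch ATOMMIC with one of the REC/StanfordKnees2019 configuration files.
# The config file name below is a placeholder; substitute the YAML you actually want to run.
atommic run -c projects/REC/StanfordKnees2019/conf/<your_config>.yaml
```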


### Automatically instantiate the model

```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
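
If you would rather keep a local copy of the checkpoint than reference the URL above, one option is the Hugging Face Hub CLI. The sketch below uses the repository and file name from this card and assumes `huggingface_hub` is installed:

```bash
# Download the .atommic checkpoint from this repository into the local Hugging Face cache
huggingface-cli download wdika/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM \
    REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
```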

### Usage

To use this model you need to download the Stanford Knees 2019 dataset. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.


## Model Architecture
```yaml
model:
  model_name: VSNet
  num_cascades: 10
  imspace_model_architecture: CONV
  imspace_in_channels: 2
  imspace_out_channels: 2
  imspace_conv_hidden_channels: 64
  imspace_conv_n_convs: 4
  imspace_conv_batchnorm: false
  dimensionality: 2
  reconstruction_loss:
    wasserstein: 1.0
```

## Training
```yaml
  optim:
    name: adamw
    lr: 1e-4
    betas:
      - 0.9
      - 0.999
    weight_decay: 0.0
    sched:
        name: InverseSquareRootAnnealing
        min_lr: 0.0
        last_epoch: -1
        warmup_ratio: 0.1

trainer:
  strategy: ddp_find_unused_parameters_false
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 20
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```

## Performance

To compute the targets from the raw k-space using the chosen coil combination method and the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.

Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
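
As an illustrative sketch only (the positional directory arguments are placeholders, not paths shipped with ATOMMIC; `--evaluation_type per_slice` is the option mentioned above), an evaluation run could look like:

```bash
# Per-slice evaluation of the reconstructions against the computed targets.
# Both directory arguments are placeholders; point them at your own targets and predictions.
python tools/evaluation/reconstruction.py \
    path/to/targets path/to/reconstructions \
    --evaluation_type per_slice
```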

Results
-------

Evaluation against SENSE targets
--------------------------------
| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.001976 +/- 0.005902 | 0.07433 +/- 0.1106 | 28.51 +/- 5.793 | 0.7084 +/- 0.289 |


## Limitations

This model was trained on the StanfordKnees2019 batch0 data using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so its results might differ from those reported on the challenge leaderboard.


## References

[1] [ATOMMIC](https://github.com/wdika/atommic)

[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1