xueh committed · commit e08e476 (verified) · 1 parent: 8b6719c

Update README.md
 
# Model summary

| Field | Details |
| ----------- | ----------- |
| Developer | Microsoft Research, Health Futures |
| Description | SNRAware models improve the signal-to-noise ratio (SNR) of complex MR images. They are provided as research-only models for reproducibility of the corresponding research. |
| Model architecture | The model is an instantiation of the Imaging Transformer architecture, in which local, global, and inter-frame signal and noise characteristics are learned. |
| Parameters | SNRAware-small: 27.7 million parameters; SNRAware-medium: 55.1 million parameters; SNRAware-large: 109 million parameters |
| Inputs | A 5-D tensor [B, C, T/F, H, W] for batch, channel, time/frame, height, and width. The last input channel is the g-factor map. |
| Context length | Not applicable |
| Outputs | A tensor of shape [B, C-1, T/F, H, W]. |
| GPUs | 16x B200 |
| Training time | 7 days |
| Public data summary (or summaries) | Not applicable |
| Dates | Nov 2025 |
| Status | Model checkpoints can be downloaded from https://huggingface.co/microsoft/SNRAware. The model may be subject to updates post-release. |
| Release date; release date in the EU (if different) | Dec 2025 |
| License | MIT |
| Model dependencies | N/A |
| Additional related assets | https://github.com/microsoft/SNRAware/ |
| Acceptable use policy | N/A |
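The input/output convention in the table above can be sketched in a few lines. This is a minimal illustration with made-up tensor sizes and a hypothetical `denoise_stub` standing in for the actual model; only the shape convention comes from the model card.

```python
import numpy as np

# Shape convention from the model card: input is [B, C, T/F, H, W],
# where the last channel holds the g-factor map and the remaining
# channels hold the complex image data.
B, C, T, H, W = 2, 3, 8, 64, 64  # illustrative sizes, not from the card

rng = np.random.default_rng(0)
complex_image = rng.standard_normal((B, C - 1, T, H, W))  # low-SNR image channels
gfactor_map = np.ones((B, 1, T, H, W))                    # g-factor map channel

# Assemble the 5-D input: image channels first, g-factor map last.
model_input = np.concatenate([complex_image, gfactor_map], axis=1)

def denoise_stub(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the SNRAware model: drops the g-factor
    channel so the output shape matches the documented [B, C-1, T/F, H, W]."""
    return x[:, :-1]

output = denoise_stub(model_input)
print(model_input.shape, output.shape)
```

The real model of course transforms the image channels rather than merely dropping the g-factor channel; the stub only demonstrates the documented shapes.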
1. Model overview

SNRAware is an imaging transformer model trained to denoise complex MR image data. Imaging transformers use attention modules to capture local, global, and inter-frame signal and noise characteristics. Denoising training used the SNRAware method, which generates MR-realistic noise on the fly to create low-SNR samples with unitary noise scaling. The model receives low-SNR complex images and g-factor maps as input and produces high-SNR complex images as output. It is provided as a research-only model for reproducibility of the corresponding research.
 
This model is only suited to denoise complex MR images with the unitary noise scaling.

Any deployed use case of the model, commercial or otherwise, is out of scope. Although we evaluated the models using publicly available research benchmarks, the models and evaluations are intended for research use only and not intended for deployed use cases.
 
 
 
2.3 Distribution channels

Model source code is available at https://github.com/microsoft/SNRAware/
 
Model was tested on the hold-out test sets for quality metrics of PSNR and SSIM.

Please refer to the publication for more results.
 
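For reference, PSNR, one of the quality metrics reported on the hold-out test sets, can be computed as below. This uses the standard definition with the peak taken from the reference image; it is illustrative and not code from the SNRAware repository.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, peak taken from the reference."""
    mse = np.mean(np.abs(reference - estimate) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.ones((32, 32))
est = ref + 0.01  # uniform 0.01 error -> MSE = 1e-4, PSNR = 40 dB
print(round(psnr(ref, est), 1))  # -> 40.0
```

SSIM, the other reported metric, additionally compares local luminance, contrast, and structure, and is typically taken from an image-processing library rather than written by hand.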
 
4.1 Long context

Not relevant.
 
4.2 Safety evaluation and red-teaming
 
 
This model is not a frontier model.

6. Requests for additional information may be directed to MSFTAIActRequest@microsoft.com.

7. Appendix

None