---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- AttentionUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_AttentionUNet_BraTS2023AdultGlioma
  results: []
---

## Model Overview

AttentionUNet for MRI segmentation on the BraTS2023AdultGlioma dataset.

## ATOMMIC: Training

To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install "atommic[all]"
```

## How to Use this Model

The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).

### Automatically instantiate the model

```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_AttentionUNet_BraTS2023AdultGlioma/blob/main/SEG_AttentionUNet_BraTS2023AdultGlioma.atommic
mode: test
```
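If you fetch the checkpoint manually rather than through a config, note that Hugging Face `blob/` URLs point at the file viewer, while the raw file is served from the corresponding `resolve/` URL. A small illustrative helper to derive it:

```python
def to_resolve_url(blob_url: str) -> str:
    """Convert a Hugging Face /blob/ (file-viewer) URL to the
    /resolve/ URL that serves the raw file."""
    return blob_url.replace("/blob/", "/resolve/", 1)

checkpoint = (
    "https://huggingface.co/wdika/SEG_AttentionUNet_BraTS2023AdultGlioma"
    "/blob/main/SEG_AttentionUNet_BraTS2023AdultGlioma.atommic"
)
print(to_resolve_url(checkpoint))
```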

### Usage

You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.

## Model Architecture
```yaml
model:
  model_name: SEGMENTATIONATTENTIONUNET
  segmentation_module: AttentionUNet
  segmentation_module_input_channels: 4
  segmentation_module_output_channels: 4
  segmentation_module_channels: 32
  segmentation_module_pooling_layers: 5
  segmentation_module_dropout: 0.0
  segmentation_module_normalize: false
  segmentation_module_norm_groups: 2
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true  # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true  # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  magnitude_input: true
  log_multiple_modalities: true  # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
  normalization_type: minmax
  normalize_segmentation_output: true
  complex_data: false
```
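The attention mechanism that distinguishes AttentionUNet from a plain U-Net is the additive attention gate applied to each skip connection. A minimal numpy sketch of the idea (illustrative only, not the ATOMMIC implementation; the 1x1 convolutions are written as per-pixel linear maps, and the channel sizes are examples):

```python
import numpy as np

def attention_gate(gate, skip, w_g, w_x, w_psi):
    """Additive attention gate (Oktay et al., sketch): the decoder's
    gating signal re-weights the encoder skip connection before use.
    gate, skip: (C, H, W); w_g, w_x: (C_int, C); w_psi: (1, C_int)."""
    g = np.einsum("ic,chw->ihw", w_g, gate)    # project gating signal
    x = np.einsum("ic,chw->ihw", w_x, skip)    # project skip features
    a = np.maximum(g + x, 0.0)                 # ReLU on the sum
    psi = np.einsum("oc,chw->ohw", w_psi, a)   # squeeze to one channel
    alpha = 1.0 / (1.0 + np.exp(-psi))         # sigmoid -> per-pixel weight
    return skip * alpha                        # suppress irrelevant features

rng = np.random.default_rng(0)
gate = rng.standard_normal((32, 8, 8))  # 32 matches the card's base width
skip = rng.standard_normal((32, 8, 8))
out = attention_gate(gate, skip,
                     rng.standard_normal((16, 32)),
                     rng.standard_normal((16, 32)),
                     rng.standard_normal((1, 16)))
print(out.shape)  # (32, 8, 8)
```

Because the attention weight is a sigmoid in (0, 1), the gate can only attenuate skip-connection features, never amplify them.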

## Training
```yaml
optim:
  name: adam
  lr: 1e-4
  betas:
    - 0.9
    - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 10
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
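The `InverseSquareRootAnnealing` schedule family behaves roughly as follows: linear warmup over `warmup_ratio` of the total steps, then decay proportional to 1/sqrt(step). A sketch under those assumptions (not the exact ATOMMIC scheduler code):

```python
import math

def inverse_sqrt_lr(step: int, max_steps: int, peak_lr: float = 1e-4,
                    warmup_ratio: float = 0.1) -> float:
    """Inverse-square-root annealing, sketched: linear warmup for
    warmup_ratio * max_steps, then lr ~ peak_lr * sqrt(warmup/step)."""
    warmup_steps = max(1, int(warmup_ratio * max_steps))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps          # linear warmup
    return peak_lr * math.sqrt(warmup_steps / step)   # 1/sqrt(step) decay

schedule = [inverse_sqrt_lr(s, max_steps=1000) for s in range(1, 1001)]
print(max(schedule))  # 0.0001, reached at the end of warmup (step 100)
```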

## Performance

Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.

Results
-------

DICE = 0.9305 +/- 0.1257
F1 = 0.6481 +/- 0.7629
HD95 = 3.836 +/- 3.01
IOU = 0.5374 +/- 0.6617
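Per-slice Dice of the kind reported above can be computed along these lines (an illustrative numpy sketch, not the ATOMMIC evaluation script; the smoothing term mirrors the `1e-5` values in the config):

```python
import numpy as np

def dice_per_slice(pred: np.ndarray, target: np.ndarray,
                   smooth: float = 1e-5) -> float:
    """Dice coefficient for one binary segmentation slice:
    2|P & T| / (|P| + |T|), smoothed to avoid division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

# Two overlapping 4x4 squares: 16 px each, 9 px intersection.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
target = np.zeros((8, 8), dtype=bool); target[3:7, 3:7] = True
print(round(dice_per_slice(pred, target), 4))  # 0.5625
```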

## Limitations

This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, and T2w images, and its performance might differ from the leaderboard results.

## References

[1] [ATOMMIC](https://github.com/wdika/atommic)

[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023.