---
title: TEDM
emoji: 🐨
colorFrom: purple
colorTo: yellow
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: false
license: mit
---
# Robust semi-supervised segmentation with timestep ensembling diffusion models (TEDM)
## Results
| Training data size | 1 (1%) | 3 (2%) | 6 (3%) | 12 (6%) | 197 (100%) |
|---|---|---|---|---|---|
| **JSRT (labelled, in-domain)** | | | | | |
| Baseline | 84.4 $\pm$ 5.4 | 91.7 $\pm$ 3.7 | 93.3 $\pm$ 2.9 | 95.3 $\pm$ 2.3 | 97.3 $\pm$ 1.2 |
| LEDM | 90.8 $\pm$ 3.5 | 94.1 $\pm$ 1.6 | 95.5 $\pm$ 1.4 | 96.4 $\pm$ 1.4 | 97.0 $\pm$ 1.3 |
| LEDMe | 93.7 $\pm$ 2.6 | 95.5 $\pm$ 1.5 | 96.7 $\pm$ 1.5 | 97.0 $\pm$ 1.1 | 97.6 $\pm$ 1.2 |
| Ours | 93.1 $\pm$ 3.4 | 94.8 $\pm$ 1.4 | 95.8 $\pm$ 1.2 | 96.6 $\pm$ 1.1 | 97.3 $\pm$ 1.2 |
| **NIH (unlabelled, in-domain)** | | | | | |
| Baseline | 68.5 $\pm$ 12.8 | 71.2 $\pm$ 15.1 | 71.4 $\pm$ 15.9 | 77.8 $\pm$ 14.0 | 81.5 $\pm$ 12.7 |
| LEDM | 63.3 $\pm$ 12.2 | 78.0 $\pm$ 10.1 | 81.2 $\pm$ 9.3 | 85.9 $\pm$ 7.4 | 88.9 $\pm$ 5.9 |
| LEDMe | 70.3 $\pm$ 11.4 | 78.3 $\pm$ 9.8 | 83.0 $\pm$ 8.6 | 84.4 $\pm$ 8.1 | 90.1 $\pm$ 5.3 |
| Ours | 80.3 $\pm$ 9.0 | 86.4 $\pm$ 6.2 | 89.2 $\pm$ 5.5 | 91.3 $\pm$ 4.1 | 92.9 $\pm$ 3.2 |
| **Montgomery (out-of-domain)** | | | | | |
| Baseline | 77.1 $\pm$ 12.0 | 83.0 $\pm$ 12.2 | 80.9 $\pm$ 14.7 | 83.8 $\pm$ 14.9 | 94.1 $\pm$ 6.6 |
| LEDM | 79.3 $\pm$ 8.1 | 85.9 $\pm$ 7.4 | 89.4 $\pm$ 6.7 | 92.3 $\pm$ 7.2 | 94.4 $\pm$ 7.2 |
| LEDMe | 80.7 $\pm$ 6.6 | 86.3 $\pm$ 6.5 | 89.5 $\pm$ 5.9 | 91.2 $\pm$ 5.6 | 95.3 $\pm$ 4.0 |
| Ours | 90.5 $\pm$ 5.3 | 91.4 $\pm$ 6.1 | 93.3 $\pm$ 6.0 | 94.6 $\pm$ 6.0 | 95.1 $\pm$ 6.9 |
## Training
- Training the backbone:
  `python train.py --dataset CXR14 --data_dir <PATH TO CXR14 DATASET>`
- Our method (TEDM):
  `python train.py --experiment TEDM --data_dir <PATH TO JSRT DATASET> --n_labelled_images <TRAINING SET SIZE>`
- The LEDM method:
  `python train.py --experiment LEDM --data_dir <PATH TO JSRT DATASET> --n_labelled_images <TRAINING SET SIZE>`
- The LEDMe method:
  `python train.py --experiment LEDMe --data_dir <PATH TO JSRT DATASET> --n_labelled_images <TRAINING SET SIZE>`
- The baseline method:
  `python train.py --experiment JSRT_baseline --data_dir <PATH TO JSRT DATASET> --n_labelled_images <TRAINING SET SIZE>`
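
Each command above trains a single configuration. A minimal sketch for sweeping every segmentation experiment over the labelled training-set sizes from the results table is shown below; the dataset path and the idea of driving `train.py` through `subprocess` are illustrative assumptions, not part of the repository.

```python
# Illustrative sweep over experiments and labelled-set sizes (run from the repository root).
import subprocess

JSRT_DIR = "/data/JSRT"              # placeholder: path to your local JSRT dataset
SIZES = [1, 3, 6, 12, 197]           # labelled training-set sizes reported in the table above
EXPERIMENTS = ["TEDM", "LEDM", "LEDMe", "JSRT_baseline"]

for experiment in EXPERIMENTS:
    for n_labelled in SIZES:
        cmd = [
            "python", "train.py",
            "--experiment", experiment,
            "--data_dir", JSRT_DIR,
            "--n_labelled_images", str(n_labelled),
        ]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)   # one training run per (experiment, size) pair
```

The backbone command is left out of the sweep since it uses `--dataset CXR14` rather than `--n_labelled_images`.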
## Testing
Before running the tests, update `DATADIR` in `dataloaders/JSRT.py`, `dataloaders/NIH.py`, and `dataloaders/Montgomery.py`, as well as `NIHPATH`, `NIHFILE`, `MONPATH`, and `MONFILE` in `auxiliary/postprocessing/run_tests.py` and `auxiliary/postprocessing/testing_shared_weights.py`, so that they point to your local copies of the datasets.
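
These are plain module-level constants; the sketch below is a hypothetical example of what they might look like after editing. All paths and file names are placeholders and do not reflect the repository's actual defaults.

```python
# Hypothetical values only -- point these at your local copies of the datasets.

# dataloaders/JSRT.py (and likewise dataloaders/NIH.py and dataloaders/Montgomery.py)
DATADIR = "/data/JSRT"                      # root folder of the corresponding dataset

# auxiliary/postprocessing/run_tests.py and testing_shared_weights.py
NIHPATH = "/data/NIH"                       # root folder of the NIH images
NIHFILE = "/data/NIH/test_list.csv"         # placeholder file listing the NIH test cases
MONPATH = "/data/Montgomery"                # root folder of the Montgomery dataset
MONFILE = "/data/Montgomery/test_list.csv"  # placeholder file listing the Montgomery test cases
```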
- For the baseline and LEDM methods, run
  `python auxiliary/postprocessing/run_tests.py --experiment <PATH TO LOG FOLDER>`
- For our method (TEDM), run
  `python auxiliary/postprocessing/testing_shared_weights.py --experiment <PATH TO LOG FOLDER>`
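
As with training, testing is one invocation per log folder. A minimal driver is sketched below; the log folder names are hypothetical and depend on where `train.py` wrote its output.

```python
# Illustrative test driver; the log folder names are hypothetical.
import subprocess

TEST_JOBS = {
    "logs/JSRT_baseline": "auxiliary/postprocessing/run_tests.py",               # baseline
    "logs/LEDM":          "auxiliary/postprocessing/run_tests.py",               # LEDM
    "logs/TEDM":          "auxiliary/postprocessing/testing_shared_weights.py",  # ours (TEDM)
}

for log_folder, test_script in TEST_JOBS.items():
    subprocess.run(["python", test_script, "--experiment", log_folder], check=True)
```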
## Figures and reporting

VS Code notebooks can be found in `auxiliary/notebooks_and_reporting`.