---
license: mit
datasets:
- laughingrice/Ultrasound_planewave_sos_inversion
tags:
- medical imaging
- ultrasound
Authors:
- Micha Feigin
- Daniel Freedman
- Brian W. Anthony
---

# Deep learning for speed of sound inversion in ultrasound imaging

This repository contains the code and models for the following papers:

1. Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng. 2020;67(4):1142-1151. doi:10.1109/TBME.2019.2931195
2. Feigin M, Zwecker M, Freedman D, Anthony BW. Detecting Muscle Activation Using Ultrasound Speed of Sound Inversion with Deep Learning. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE; 2020:2092-2095. doi:10.1109/EMBC44109.2020.9175237
3. Feigin M, Freedman D, Anthony BW. Computing Speed-of-Sound from Ultrasound: User-Agnostic Recovery and a New Benchmark. IEEE Trans Biomed Eng. 2023. doi:10.1109/TBME.2023.3327147

This repository contains the network code and the trained models behind the algorithms and results reported in these papers.

The code was tested under Python 3.9. The Anaconda environment is defined in `environment.yml`; set it up with the command `conda env create -f environment.yml`.

## Data

The dataset is available on Hugging Face at https://huggingface.co/datasets/laughingrice/Ultrasound_planewave_sos_inversion

Variables in the files are stored in `[sample, layer, x/channel, y/sample]` order:

* `alpha_coeff` -- attenuation (alpha) coefficient used for the simulations, full resolution
* `c0` -- speed of sound used for the simulations, full resolution
* `data` -- channel data (the results in the papers used the first 2048 samples, 64 active channels, and the first layer with a flat plane wave, to match the existing physical hardware)
* `dx` -- spatial sampling step of `c0` and `alpha_coeff`
* `f` -- temporal sampling frequency of the channel data (40 MHz)

## Models

Model files under the `models` directory, for the results presented in the papers, together with their matching execution parameters, are as follows:

* `tbme_sos.pt` -- network weights for the network presented in [1]
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme_sos.h5 --load_ver models/tbme_sos.pt --net_type tbme`
* `embc_sos.pt` -- network weights for the network presented in [2]
  * `python . --test_files data/supplamentary_sample.mat --test_fname embc_sos.h5 --load_ver models/embc_sos.pt --net_type embc`
* `tbme2_sos.pt` -- network weights for the network presented in [3]
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme2_sos.h5 --load_ver models/tbme2_sos.pt`
* `tbme2_sos_rand_gain.pt` -- [3] trained to recover the speed-of-sound map with a random gain profile and scaling
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme2_sos_gain.h5 --load_ver models/tbme2_sos_rand_gain.pt`
* `tbme2_attn.pt` -- [3] trained to recover the attenuation coefficient
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme2_attn.h5 --load_ver models/tbme2_attn.pt --label_vars alpha_coeff`
* `tbme2_sos_attn.pt` -- [3] trained to recover both the speed-of-sound map and the attenuation coefficient
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme2_sos_attn.h5 --load_ver models/tbme2_sos_attn.pt --label_vars c0 alpha_coeff`
* `tbme2_phase_sos.pt` -- [3] trained to recover the speed-of-sound map using the IQ phase component
  * `python . --test_files data/supplamentary_sample.mat --test_fname tbme2_phase_sos.h5 --load_ver models/tbme2_phase_sos.pt --phase_inv 1`
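## Inspecting a sample file

The following is a minimal sketch, not part of the repository's code, for inspecting one sample file before running the commands above. It assumes the file (e.g. `data/supplamentary_sample.mat`, as referenced in the commands) is a MATLAB v7.3 / HDF5 container exposing the variables listed in the Data section; if the files in the Hugging Face dataset use a different format or axis layout, adjust the reader accordingly.

```python
# Minimal sketch (assumption: MATLAB v7.3 / HDF5 files with the variables listed above).
import h5py
import numpy as np

with h5py.File("data/supplamentary_sample.mat", "r") as mat:
    channel_data = np.asarray(mat["data"])        # raw channel data
    sos_map = np.asarray(mat["c0"])               # ground-truth speed-of-sound map
    attenuation = np.asarray(mat["alpha_coeff"])  # ground-truth attenuation coefficient
    dx = np.asarray(mat["dx"]).item()             # spatial sampling step of c0 / alpha_coeff
    fs = np.asarray(mat["f"]).item()              # temporal sampling frequency (40 MHz)

# Axis order per the Data section is [sample, layer, x/channel, y/sample];
# it may appear transposed depending on how the file was written.
print("channel data:", channel_data.shape)
print("speed-of-sound map:", sos_map.shape, "dx =", dx, "fs =", fs)
```

The dataset itself can be fetched from the dataset page linked above, or programmatically from the Hugging Face Hub (for example with `huggingface_hub.snapshot_download` and `repo_type="dataset"`).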