title | abstract
---|---
Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation | Airway segmentation is crucial for the examination, diagnosis, and prognosis
of lung diseases, while its manual delineation is unduly burdensome. To
alleviate this time-consuming and potentially subjective manual procedure,
researchers have proposed methods to automatically segment airways from
computerized tomography (CT) images. However, some small-sized airway branches
(e.g., bronchus and terminal bronchioles) significantly aggravate the
difficulty of automatic segmentation by machine learning models. In particular,
the variance of voxel values and the severe data imbalance in airway branches
make the computational module prone to discontinuous and false-negative
predictions, especially for cohorts with different lung diseases. Attention
mechanisms have shown the capacity to segment complex structures, while fuzzy
logic can reduce the uncertainty in feature representations. Therefore, the
integration of deep attention networks and fuzzy theory, realised here as a
fuzzy attention layer, is a promising route to better generalization and
robustness. This paper presents an efficient method for airway segmentation,
comprising a novel fuzzy attention neural network and a comprehensive loss
function to enhance the spatial continuity of airway segmentation. The deep
fuzzy set is formulated by a set of voxels in the feature map and a learnable
Gaussian membership function. Different from the existing attention mechanism,
the proposed channel-specific fuzzy attention addresses the issue of
heterogeneous features in different channels. Furthermore, a novel evaluation
metric is proposed to assess both the continuity and completeness of airway
structures. The efficiency, generalization, and robustness of the proposed
method are demonstrated by training on data from normal lungs while testing on
datasets of lung cancer, COVID-19, and pulmonary fibrosis. |
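The channel-specific fuzzy attention described above can be illustrated with a minimal sketch, assuming (as the abstract states) a learnable Gaussian membership function per channel; the parameter values and the use of the membership degree as a multiplicative attention weight are illustrative assumptions, not the paper's exact layer:

```python
import math

def gaussian_membership(x, c, sigma):
    """Gaussian membership: degree to which activation x belongs to the fuzzy set."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def channel_fuzzy_attention(feature_map, centers, sigmas):
    """Weight each voxel activation by a channel-specific fuzzy membership degree.

    feature_map: list of channels, each a list of voxel activations.
    centers/sigmas: per-channel membership parameters (learned in the real model).
    """
    out = []
    for channel, (c, s) in zip(feature_map, zip(centers, sigmas)):
        out.append([v * gaussian_membership(v, c, s) for v in channel])
    return out
```

Because each channel carries its own `(c, sigma)` pair, heterogeneous channels are weighted independently, which is the motivation the abstract gives for a channel-specific design.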
Microstructure estimation from diffusion-MRI: Compartmentalized models in permeable cellular tissue | Diffusion-weighted magnetic resonance imaging (DW-MRI) is used to
characterize brain tissue microstructure employing tissue-specific biophysical
models. A current limitation, however, is that most of the proposed models are
based on the assumption of negligible water exchange between the intra- and
extracellular compartments, which might not be valid in various brain tissues,
including unmyelinated axons, gray matter, and tumors. The purpose of this work
is to quantify the effect of membrane permeability on the estimates of two
popular models neglecting exchange, and compare their performance with a model
including exchange. To this aim, DW-MRI experiments were performed in
controlled environments with Monte-Carlo simulations. The DW-MRI signals were
generated in numerical substrates mimicking biological tissue made of spherical
cells with permeable membranes, as in cancerous tissue or brain gray matter.
From these signals, the substrates' properties were estimated using SANDI and
VERDICT, the two compartment-based models neglecting exchange, and CEXI, a new
model which includes exchange. Our results show that, in cellular permeable
tissue, the model with exchange outperformed models without exchange in the
estimation of the tissue properties by providing more stable estimates of cell
size, intracellular volume fraction and extracellular diffusion coefficient.
Moreover, the model with exchange accurately estimated the exchange time in the
range of permeability reported for cellular tissue. Finally, the simulations
performed in this work showed that exchange between the intracellular and
extracellular spaces cannot be neglected in permeable tissue if accurate
estimates are to be obtained with a conventional PGSE sequence. Consequently,
existing compartmentalized models of impermeable tissue cannot be used for
microstructure estimation of cellular permeable tissue. |
Patient-specific mean teacher UNet for enhancing PET image and low-dose PET reconstruction on RefleXion X1 biology-guided radiotherapy system | The RefleXion X1 is the first biology-guided radiotherapy (BgRT) system. Its
dual 90-degree PET detector collects fewer coincidence events compared to a
full-ring diagnostic PET system. In the proposed BgRT workflow, a short scan is
acquired before treatment delivery to ensure image quality and consistency. The
shorter scan time, a quarter of the simulation scan time, also leads to fewer
coincidence events and hence reduced image quality. In this study, we proposed
a patient-specific mean teacher UNet (MT-UNet) to enhance PET image quality and
low-dose PET reconstruction on RefleXion X1. PET/CT scans of nine cancer
patients were acquired using RefleXion X1. Every patient had one simulation
scan. Five patients had additional scans acquired during the first and the
final treatment fractions. Treatment scans were acquired using the same imaging
protocol as the simulation scan. For each scan, we reconstructed a full-dose
image and evenly split coincidence events into four sessions to reconstruct
four quarter-dose PET images. For each patient, our proposed MT-UNet was
trained using quarter-dose and full-dose images of the simulation scan. For the
image quality enhancement task, we applied nine trained MT-UNets to full-dose
simulation PET images of the nine patients to generate enhanced images,
respectively. The enhanced images were compared with the original full-dose
images using CNR and SNR. For the low-dose image reconstruction task, we
applied five trained MT-UNets to ten quarter-dose treatment images of five
patients to predict full-dose images, respectively. The predicted and ground
truth full-dose images were compared using SSIM and PSNR. We also trained and
evaluated patient-specific UNets for model comparison. Our proposed
patient-specific MT-UNet achieved better performance in improving the quality
of RefleXion low-dose and full-dose images compared to the patient-specific
UNet. |
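A mean teacher setup keeps a teacher network whose weights are an exponential moving average (EMA) of the student's weights. A minimal sketch of that update, with `alpha` as an assumed smoothing hyperparameter (the abstract does not give MT-UNet's exact training details):

```python
def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average: teacher weights track the student smoothly.

    teacher/student: flat lists of parameter values; alpha close to 1 means the
    teacher changes slowly, which stabilises the consistency targets.
    """
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]
```

In a mean-teacher scheme this update runs after every student optimisation step, and the teacher's predictions on perturbed inputs serve as consistency targets for the student.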
Sub-second photon dose prediction via transformer neural networks | Fast dose calculation is critical for online and real time adaptive therapy
workflows. While modern physics-based dose algorithms must compromise accuracy
to achieve low computation times, deep learning models can potentially perform
dose prediction tasks with both high fidelity and speed. We present a deep
learning algorithm that, exploiting synergies between Transformer and
convolutional layers, accurately predicts broad photon beam dose distributions
in a few milliseconds. The proposed improved Dose Transformer Algorithm (iDoTA)
maps arbitrary patient geometries and beam information (in the form of a 3D
projected shape resulting from a simple ray tracing calculation) to their
corresponding 3D dose distribution. Treating the 3D CT input and dose output
volumes as a sequence of 2D slices along the direction of the photon beam,
iDoTA solves the dose prediction task as sequence modeling. The proposed model
combines a Transformer backbone routing long-range information between all
elements in the sequence, with a series of 3D convolutions extracting local
features of the data. We train iDoTA on a dataset of 1700 beam dose
distributions, using 11 clinical volumetric modulated arc therapy (VMAT) plans
(from prostate, lung and head and neck cancer patients with 194-354 beams per
plan) to assess its accuracy and speed. iDoTA predicts individual photon beams
in ~50 milliseconds with a high gamma pass rate of 97.72% (2 mm, 2%).
Furthermore, estimating full VMAT dose distributions in 6-12 seconds, iDoTA
achieves state-of-the-art performance with a 99.51% (2 mm, 2%) pass rate.
Offering the sub-second speed needed in online and real-time adaptive
treatments, iDoTA represents a new state of the art in data-driven photon dose
calculation. The proposed model can massively speed up current photon
workflows, reducing calculation times from a few minutes to just a few seconds. |
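Treating the CT volume as a sequence of 2D slices means the Transformer backbone routes information between slice embeddings via self-attention. A minimal pure-Python sketch of single-head self-attention over per-slice feature vectors (identity Q/K/V projections are an illustrative simplification, not iDoTA's architecture):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def slice_attention(slices):
    """Single-head self-attention over a sequence of per-slice feature vectors.

    Q = K = V = slices (identity projections) for illustration; real models
    learn linear projections and use multiple heads.
    """
    d = len(slices[0])
    out = []
    for q in slices:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in slices]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, slices)) for i in range(d)])
    return out
```

The key property for dose prediction is that every slice attends to every other slice, so dose deposited far downstream of the beam entry can still depend on upstream geometry.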
A probabilistic deep learning model of inter-fraction anatomical variations in radiotherapy | In radiotherapy, the internal movement of organs between treatment sessions
causes errors in the final radiation dose delivery. Motion models can be used
to simulate motion patterns and assess anatomical robustness before delivery.
Traditionally, such models are based on principal component analysis (PCA) and
are either patient-specific (requiring several scans per patient) or
population-based, applying the same deformations to all patients. We present a
hybrid approach which, based on population data, makes it possible to predict
patient-specific inter-fraction variations for an individual patient. We
propose a deep learning probabilistic framework that generates deformation
vector fields (DVFs) warping a patient's planning computed tomography (CT) into
possible patient-specific anatomies. This daily anatomy model (DAM) uses a few
random variables capturing groups of correlated movements. Given a new planning
CT, DAM estimates the joint distribution over the variables, with each sample
from the distribution corresponding to a different deformation. We train our
model using a dataset of 312 CT pairs from 38 prostate cancer patients. For 2
additional patients (22 CTs), we compute the contour overlap between real and
generated images, and compare the sampled and ground truth distributions of
volume and center of mass changes. With a DICE score of 0.86 and a distance
between prostate contours of 1.09 mm, DAM matches and improves upon PCA-based
models. The distribution overlap further indicates that DAM's sampled movements
match the range and frequency of clinically observed daily changes on repeat
CTs. Conditioned only on a planning CT and contours of a new patient without
any pre-processing, DAM can accurately predict CTs seen during subsequent
treatment sessions, which can be used for anatomically robust treatment
planning and robustness evaluation against inter-fraction anatomical changes. |
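The sampling step can be pictured as weighting a small set of deformation modes by latent Gaussian variables; a minimal sketch, assuming a linear combination of modes (DAM itself is a deep generative model, so this is only an analogy to its latent-variable sampling, closer in spirit to the PCA baselines it is compared against):

```python
import random

def sample_dvf(mean_dvf, modes, rng):
    """Draw one deformation by weighting deformation modes with latent normals.

    mean_dvf: flattened mean deformation vector field.
    modes: list of mode vectors (same length as mean_dvf); each latent z_i
    scales one group of correlated movements.
    """
    z = [rng.gauss(0.0, 1.0) for _ in modes]
    return [m + sum(zi * mode[i] for zi, mode in zip(z, modes))
            for i, m in enumerate(mean_dvf)]
```

Each call yields a different plausible anatomy; warping the planning CT with many such samples supports the robustness evaluation the abstract describes.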
Application of the nnU-Net for automatic segmentation of lung lesion on CT images, and implication on radiomic models | Lesion segmentation is a crucial step of the radiomic workflow. Manual
segmentation requires long execution time and is prone to variability,
impairing the realisation of radiomic studies and their robustness. In this
study, a deep-learning automatic segmentation method was applied on computed
tomography images of non-small-cell lung cancer patients. The use of manual vs
automatic segmentation in the performance of survival radiomic models was
assessed as well. METHODS A total of 899 NSCLC patients were included (2
proprietary datasets: A and B; 1 public dataset: C). Automatic segmentation of lung
lesions was performed by training a previously developed architecture, the
nnU-Net, including 2D, 3D and cascade approaches. The quality of automatic
segmentation was evaluated with DICE coefficient, considering manual contours
as reference. The impact of automatic segmentation on the performance of a
radiomic model for patient survival was explored by extracting radiomic
hand-crafted and deep-learning features from manual and automatic contours of
dataset A, and feeding different machine learning algorithms to classify
survival above/below median. Models' accuracies were assessed and compared.
RESULTS The best agreement between automatic and manual contours, with DICE =
0.78 (±0.12), was achieved by averaging predictions from 2D and 3D models, and
applying a post-processing technique to extract the maximum connected
component. No statistical differences were observed in the performances of
survival models when using manual or automatic contours, hand-crafted, or deep
features. The best classifier showed an accuracy between 0.65 and 0.78.
CONCLUSION The promising role of nnU-Net for automatic segmentation of lung
lesions was confirmed, dramatically reducing the time-consuming physicians'
workload without impairing the accuracy of survival predictive models based on
radiomics. |
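The DICE coefficient used to score automatic against manual contours is twice the overlap divided by the total volume of the two masks; a minimal sketch on flattened binary masks:

```python
def dice(mask_a, mask_b):
    """DICE overlap between two binary masks (flattened voxel lists of 0/1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0
```

DICE = 1 means identical contours, 0 means no overlap, so the reported 0.78 indicates substantial but imperfect agreement with the manual reference.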
$^{18}$F-PSMA-1007 salivary gland dosimetry: Comparison between different methods for dose calculation and assessment of inter- and intra-patient variability | Dosimetry of salivary glands (SGs) is usually implemented using simplified
calculation approaches and approximated geometries. Our aims were to compare
different dosimetry methods to calculate SGs absorbed doses (ADs) following
18F-PSMA-1007 injection, and to assess the AD variation across patients and
single SG components. Five patients with prostate cancer recurrence underwent
PET/CT acquisitions of the head and neck, 0.5, 2 and 4 hours after
18F-PSMA-1007 injection. Parotid and submandibular glands were segmented on CT
to derive SGs volumes and masses, while PETs were used to derive
Time-Integrated Activity Coefficients. Average ADs to single SG components or
total SG (tSG) were calculated with the following methods: i) direct Monte
Carlo (MC) simulation with GATE/GEANT4; ii) spherical model (SM) of OLINDA/EXM
2.1, adopting either patient-specific or standard ICRP89 organ masses (SMstd);
iii) ellipsoidal model (EM); iv) MIRD approach with organ S-factors from
OLINDA/EXM 2.1 and OpenDose collaboration, with or without contribution from
cross irradiation originating outside the SGs. The maximum percent AD
difference across SG components (δmax) and across patients (Δmax)
were calculated. Compared to MC, ADs to single SG components were significantly
underestimated by all methods (average relative differences between -14.5% and
-30.4%). Using MC, SM and EM, δmax was never below 25% (up to 113%).
δmax values up to 702% were obtained with SMstd. Concerning tSG, results within
10% of the MC were obtained only if cross irradiation from the remainder of the
body or from the remainder of the head was accounted for. The Δmax
ranged between 58% and 78% across patients. Specific masses of single SG
components should always be considered given their large intra- and inter-
patient variability. |
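The MIRD approach mentioned in (iv) combines time-integrated activity coefficients (TIACs) with organ S-factors; a minimal sketch of the schema, with the organ names and numerical values purely illustrative:

```python
def mird_dose(tiacs, s_factors):
    """Absorbed dose to one target organ under the MIRD schema.

    tiacs: {source organ: time-integrated activity coefficient}.
    s_factors: {source organ: S-factor, i.e. dose to the target per unit
    time-integrated activity in that source}.
    """
    return sum(tiacs[src] * s for src, s in s_factors.items())
```

Including a "remainder of body" source term in `s_factors` is what the abstract refers to as accounting for cross irradiation originating outside the salivary glands.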
Dosimetric Evaluation of a New Rotating Gamma System for Stereotactic Radiosurgery | Purpose: A novel rotating gamma stereotactic radiosurgery (SRS) system
(Galaxy RTi) with real-time image guidance technology has been developed for
high-precision SRS and frameless fractionated stereotactic radiotherapy (SRT).
This work investigated the dosimetric quality of Galaxy by comparing both the
machine treatment parameters and plan dosimetry parameters with those of the
widely used Leksell Gamma Knife (LGK) systems for SRS. Methods: The Galaxy RTi
system uses 30 cobalt-60 sources on a rotating gantry to deliver non-coplanar,
non-overlapping arcs simultaneously while the LGK 4C uses 201 static cobalt-60
sources to deliver noncoplanar beams. Ten brain cancer patients previously
treated on the LGK 4C were retrieved from our clinical database. The lesion
volume for these cases varied from 0.1 cm3 to 15.4 cm3. Galaxy plans
were generated using the Prowess TPS (Prowess, Concord, CA) with the same dose
constraints and optimization parameters. Treatment quality metrics such as
target coverage (% volume receiving the prescription dose), conformity index
(CI), cone size, number of shots, and beam-on time were compared together with DVH
curves and dose distributions. Results: Superior treatment plans were generated
for the Galaxy system that met our clinical acceptance criteria. For the 10
patients investigated, the mean CI and dose coverage for Galaxy was 1.77 and
99.24 compared to 1.94 and 99.19 for LGK, respectively. The beam-on time for
Galaxy was 17.42 minutes compared to 21.34 minutes for LGK (both assuming dose
rates at the initial installation). The dose fall-off is much faster for
Galaxy, compared with LGK. Conclusion: The Galaxy RTi system can provide dose
distributions with similar quality to that of LGK with less beam-on time and
faster dose fall-off. The system is also capable of real-time image guidance at
treatment position to ensure accurate dose delivery for SRS. |
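The treatment quality metrics compared here can be computed directly from a dose grid; a minimal sketch of target coverage and one common conformity-index definition (prescription isodose volume over target volume, which is an assumption, since CI definitions vary between centres):

```python
def coverage(target, dose, rx):
    """Percent of target voxels receiving at least the prescription dose rx."""
    covered = sum(1 for t, d in zip(target, dose) if t and d >= rx)
    return 100.0 * covered / sum(target)

def conformity_index(target, dose, rx):
    """CI = prescription isodose volume / target volume (values near 1 are better)."""
    piv = sum(1 for d in dose if d >= rx)
    return piv / sum(target)
```

Under this definition, a CI above 1 (such as the reported 1.77 and 1.94) means the prescription isodose spills beyond the target.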
Range margin reduction in carbon ion therapy: potential benefits of using radioactive ion beams | Radiotherapy with heavy ions, in particular, 12C beams, is one of the most
advanced forms of cancer treatment. Sharp dose gradients and high biological
effectiveness in the target region make these beams an ideal tool to treat
deep-seated and radioresistant tumors; at the same time, however, they make
treatments sensitive to small errors in range prediction. Safety margins are
added to the tumor volume to
mitigate these uncertainties and ensure its uniform coverage, but during the
irradiation they lead to unavoidable damage to the surrounding healthy tissue.
To fully exploit the benefits of a sharp Bragg peak, a large effort is put into
establishing precise range verification methods for the so-called image-guided
radiotherapy. Despite positron emission tomography being widely in use for this
purpose in 12C ion therapy, the low count rates, biological washout, and broad
shape of the activity distribution still limit its precision to a few
millimeters. Instead, radioactive beams used directly for treatment would yield
an improved signal and a closer match with the dose fall-off, potentially
enabling precise in vivo beam range monitoring. We have performed a treatment
planning study to estimate the possible impact of the reduced range
uncertainties, enabled by radioactive 11C beams treatments, on sparing critical
organs in the tumor proximity. We demonstrate that (i) annihilation maps for
11C ions can in principle reflect even millimeter shifts in dose distributions
in the patient, (ii) outcomes of treatment planning with 11C beams are
significantly improved in terms of meeting the constraints for the organs at
risk compared to 12C plans, and (iii) less severe toxicities for serial and
parallel critical organs can be expected following 11C treatment with reduced
range uncertainties, compared to 12C treatments. |
Another view of sequential sampling in the birth process with immigration | Models of counts-of-counts data have been extensively used in the biological
sciences, for example in cancer, population genetics, sampling theory and
ecology. In this paper we explore properties of one model that is embedded into
a continuous-time process and can describe the appearance of certain biological
data such as COVID DNA sequences in a database. More specifically, we consider
an evolving model of counts-of-counts data that arises as the family size
counts of samples taken sequentially from a Birth process with Immigration
(BI). Here, each family represents a type or species, and the family size
counts represent the type or species frequency spectrum in the population. We
study the correlation of $S(a,b)$ and $S(c,d)$, the number of families observed
in two disjoint time intervals $(a,b)$ and $(c,d)$. We find the expected sample
variance and its asymptotics for $p$ consecutive sequential samples
$\mathbf{S}_p:=(S(t_0,t_1),\dots, S(t_{p-1},t_p))$, for any given
$0=t_0<t_1<\dots<t_p$. By conditioning on the sizes of the samples, we provide
a connection between $\mathbf{S}_p$ and $p$ sequential samples of sizes
$n_1,n_2,\dots,n_p$, drawn from a single run of a Chinese Restaurant Process.
The properties of the latter were studied in da Silva et al. (2022). We show
how the continuous-time framework helps to make asymptotic calculations easier
than its discrete-time counterpart. As an application, for a specific choice of
$t_1,t_2,\dots, t_p$, we revisit Fisher's 1943 multi-sampling problem and give
another explanation of what Fisher's model could have meant in the world of
sequential samples drawn from a BI process. |
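A single run of a Chinese Restaurant Process, to which the sequential samples are connected, can be simulated in a few lines; a minimal sketch with concentration parameter `theta` (notation assumed here, not taken from the paper):

```python
import random

def crp_sample(n, theta, rng):
    """Seat n customers in a Chinese Restaurant Process with parameter theta.

    Customer i+1 joins an existing table with probability proportional to its
    size, or opens a new table with probability theta / (i + theta).
    Returns the table (family) size counts.
    """
    tables = []
    for i in range(n):
        r = rng.random() * (i + theta)
        acc = 0.0
        for j, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[j] += 1
                break
        else:
            tables.append(1)
    return tables
```

The number of tables after `n` customers plays the role of the number of observed families `S(a,b)` in the abstract's sampling scheme.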
Graph Attention Networks Unveil Determinants of Intra- and Inter-city Health Disparity | Understanding the determinants underlying variations in urban health status
is important for informing urban design and planning, as well as public health
policies. Multiple heterogeneous urban features could modulate the prevalence
of diseases across different neighborhoods in cities and across different
cities. This study examines heterogeneous features related to
socio-demographics, population activity, mobility, and the built environment
and their non-linear interactions to examine intra- and inter-city disparity in
prevalence of four disease types: obesity, diabetes, cancer, and heart disease.
Features related to population activity, mobility, and facility density are
obtained from large-scale anonymized mobility data. These features are used in
training and testing graph attention network (GAT) models to capture non-linear
feature interactions as well as spatial interdependence among neighborhoods. We
tested the models in five U.S. cities across the four disease types. The
results show that the GAT model can predict the health status of people in
neighborhoods based on the top five determinant features. The findings unveil
that population activity and built-environment features along with
socio-demographic features differentiate the health status of neighborhoods to
such a great extent that a GAT model could predict the health status using
these features with high accuracy. The results also show that the model trained
on one city can predict health status in another city with high accuracy,
allowing us to quantify the inter-city similarity and discrepancy in health
status. The model and findings provide novel approaches and insights for urban
designers, planners, and public health officials to better understand and
improve health disparities in cities by considering the significant determinant
features and their interactions. |
Discriminating between individual-based models of collective cell motion in a benchmark flow geometry using standardised spatiotemporal patterns | Collectively coordinated cell migration plays a role in tissue embryogenesis,
cancer, homeostasis and healing. To study these processes, different cell-based
modelling approaches have been developed, ranging from lattice-based cellular
automata to lattice-free models that treat cells as point-like particles or
extended detailed cell shape contours. In the spirit of what Osborne et al.
[PLOS Computational Biology, (2017) 13, 1-34] did for cellular tissue structure
simulation models, we here compare five simulation models of collective cell
migration, chosen to be representative in increasing order of included detail.
They are Vicsek–Grégoire particles, Szabó-like particles, self-propelled
Voronoi model, cellular Potts model, and multiparticle cells, where each model
includes cell motility. We examine how these models compare when applied to the
same biological problem, and what differences in behaviour are due to different
model assumptions and abstractions. For that purpose, we use a benchmark that
discriminates between complex material flow models, and that can be
experimentally approached using cell cultures: the flow within a channel around
a circular obstacle, that is, the geometry Stokes used in his historical 1851
experiment. For each model we explain how to best implement it; vary cell
density, attraction force and alignment interaction; draw the resulting maps of
velocity, density and deformation fields; and eventually discuss its respective
advantages and limitations. We thus provide a recommendation on how to select a
model to answer a given question, and we examine whether models of motile
particles and motile cells display similar collective effects. |
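The simplest of the compared model families, Vicsek-style aligning particles, updates each heading toward the mean heading of its neighbours plus noise; a minimal sketch of one alignment step (the neighbour lists and uniform-noise form are standard Vicsek-model choices, not details from this paper):

```python
import math
import random

def vicsek_step(angles, neighbours, eta, rng):
    """One Vicsek alignment update.

    angles: current heading of each particle (radians).
    neighbours: for each particle, the indices of particles within its
    interaction radius. eta: amplitude of uniform angular noise.
    """
    new = []
    for i, nbrs in enumerate(neighbours):
        group = [i] + list(nbrs)
        sx = sum(math.cos(angles[j]) for j in group)
        sy = sum(math.sin(angles[j]) for j in group)
        noise = eta * (rng.random() - 0.5)
        new.append(math.atan2(sy, sx) + noise)
    return new
```

Iterating this update (together with moving each particle along its heading) produces the collective flow patterns that the benchmark geometry is designed to discriminate.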
Three-component contour dynamics model to simulate and analyze amoeboid cell motility | Amoeboid cell motility is relevant in a wide variety of biomedical
applications such as wound healing, cancer metastasis, and embryonic
morphogenesis. It is characterized by pronounced changes of the cell shape
associated with expansions and retractions of the cell membrane, which result
in a crawling kind of locomotion. Despite existing computational models of
amoeboid motion, the inference of expansion and retraction components of
individual cells, the corresponding classification of cells, and the a priori
specification of the parameter regime to achieve a specific motility behavior
remain challenging open problems. We propose a novel model of the
spatio-temporal evolution of two-dimensional cell contours comprising three
biophysiologically motivated components: a stochastic term accounting for
membrane protrusions and two deterministic terms accounting for membrane
retractions by regularizing the shape and area of the contour. Mathematically,
these correspond to the intensity of a self-exciting Poisson point process, the
area-preserving curve-shortening flow, and an area adjustment flow. The model
is used to generate contour data for a variety of qualitatively different,
e.g., polarized and non-polarized, cell tracks that are hardly distinguishable
from experimental data. In application to experimental cell tracks, we inferred
the protrusion component and examined its correlation to commonly used
biomarkers: the actin concentration close to the membrane and its local motion.
Due to the low model complexity, parameter estimation is fast, straightforward
and offers a simple way to classify contour dynamics based on two locomotion
types: the amoeboid and a so-called fan-shaped type. For both types, we use
cell tracks segmented from fluorescence imaging data of the model organism D.
discoideum. An implementation of the model is provided within the open-source
software package AmoePy. |
Extracting lung function-correlated information from CT-encoded static textures | The inherent characteristics of lung tissues, which are independent of
breathing manoeuvre, may provide fundamental information on lung function. This
paper attempted to study function-correlated lung textures and their spatial
distribution from CT. 21 lung cancer patients with thoracic 4DCT scans,
DTPA-SPECT ventilation images (V), and available pulmonary function test (PFT)
measurements were collected. 79 radiomic features were included for analysis,
and a sparse-to-fine strategy including subregional feature discovery and
voxel-wise feature distribution study was carried out to identify the
function-correlated radiomic features. At the subregion level, lung CT images
were partitioned and labeled as defected/non-defected patches according to
reference V. At the voxel-wise level, feature maps (FMs) of selected feature
candidates were generated for each 4DCT phase. Quantitative metrics, including
Spearman coefficient of correlation (SCC) and Dice similarity coefficient (DSC)
for FM-V spatial agreement assessments, intra-class coefficient of correlation
(ICC) for FM robustness evaluations, and FM-PFT comparisons, were applied to
validate the results. At the subregion level, eight function-correlated
features were filtered out with medium-to-large statistical strength (effect
size>0.330) to differentiate defected/non-defected lung regions. At the
voxel-wise level, FMs of candidates yielded moderate-to-strong voxel-wise
correlations with reference V. Among them, FMs of GLDM Dependence
Non-uniformity showed the most robust (ICC = 0.96) spatial correlation, with
median SCCs ranging from 0.54 to 0.59 throughout ten phases. Its phase-averaged
FM achieved a median SCC of 0.60, the median DSC of 0.60/0.65 for high/low
functional lung volumes, respectively, and the correlation of 0.646 between the
spatially averaged feature values and PFT measurements. |
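The Spearman coefficient of correlation (SCC) used for the FM-V agreement assessment is the Pearson correlation of ranks, often computed via rank differences; a minimal sketch without tie correction:

```python
def rank(xs):
    """Rank positions (1-based) of the values in xs; ties broken by order."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(xs, ys):
    """Spearman rank correlation between two equal-length lists (no tie correction)."""
    n = len(xs)
    rx, ry = rank(xs), rank(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Because it only uses ranks, the SCC captures monotonic agreement between a feature map and the reference ventilation even when their value scales differ.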
Deep Learning-based Protoacoustic Signal Denoising for Proton Range Verification | Objective: Proton therapy offers an advantageous dose distribution compared
to photon therapy, since it deposits most of the energy at the end of its
range, namely the Bragg peak (BP). The protoacoustic technique was developed to
determine BP locations in vivo. However, it requires a large dose delivered to
the tissue to obtain an averaged acoustic signal with a sufficient
signal-to-noise ratio (SNR), which is not clinically practical. We propose a deep
learning-based technique to acquire denoised acoustic signals and reduce BP
range uncertainty with much lower doses. Approach: Three accelerometers were
placed on the distal surface of a cylindrical polyethylene (PE) phantom to
collect protoacoustic signals. In total 512 raw signals were collected at each
device. Device-specific stacked autoencoder (SAE) denoising models were trained
to denoise the input signals, which were generated by averaging 1, 2, 4, 8, 16,
or 32 raw signals. Both supervised and unsupervised learning training
strategies were tested for comparison. Mean squared error (MSE),
signal-to-noise ratio (SNR) and the Bragg peak (BP) range uncertainty were used
for model evaluation. Main results: After SAE denoising, the MSE was
substantially reduced, and the SNR was enhanced. Overall, the supervised SAEs
outperformed the unsupervised SAEs in BP range verification. For the high
accuracy detector, it achieved a BP range uncertainty of 0.20 +/- 3.44 mm by
averaging over 8 raw signals, while for the other two low accuracy detectors,
they achieved the BP uncertainty of 1.44 +/- 6.45 mm and -0.23 +/- 4.88 mm by
averaging 16 raw signals, respectively. Significance: We have proposed a deep
learning based denoising method to enhance the SNR of protoacoustic
measurements and improve the accuracy in BP range verification, which greatly
reduces the dose and time for potential clinical applications. |
TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks | Deep learning models have been developed for a variety of tasks and are
deployed every day to work in real conditions. Some of these tasks are critical
and models need to be trusted and safe, e.g. military communications or cancer
diagnosis. These models are trained on real data, simulated data, or a
combination of both to be highly predictive. However, gathering enough real
data, or simulating data representative of all real conditions, is costly,
sometimes impossible due to confidentiality, and often infeasible: real
conditions are constantly changing and sometimes intractable. A solution is to
deploy machine learning models that give predictions only when they are
confident enough and otherwise raise a flag or abstain. One issue is that
standard models easily fail at detecting
out-of-distribution samples where their predictions are unreliable.
We present here TrustGAN, a generative adversarial network pipeline targeting
trustworthiness. It is a deep learning pipeline which improves a target
model's confidence estimation without impacting its predictive power. The
pipeline can accept any given deep learning model which outputs a prediction
and a confidence on this prediction. Moreover, the pipeline does not need to
modify this target model. It can thus be easily deployed in a MLOps (Machine
Learning Operations) setting.
The pipeline is applied here to a target classification model trained on
MNIST data to recognise numbers based on images. We compare such a model when
trained in the standard way and with TrustGAN. We show that on
out-of-distribution samples, here FashionMNIST and CIFAR10, the estimated
confidence is largely reduced. We observe similar conclusions for a
classification model trained on 1D radio signals from AugMod, tested on
RML2016.04C. We also publicly release the code. |
Automated Deep Aberration Detection from Chromosome Karyotype Images | Chromosome analysis is essential for diagnosing genetic disorders. For
hematologic malignancies, identification of somatic clonal aberrations by
karyotype analysis remains the standard of care. However, karyotyping is costly
and time-consuming because of the largely manual process and the expertise
required in identifying and annotating aberrations. Efforts to automate
karyotype analysis have to date fallen short in aberration detection. Using a training
set of ~10k patient specimens and ~50k karyograms from over 5 years from the
Fred Hutchinson Cancer Center, we created a labeled set of images representing
individual chromosomes. These individual chromosomes were used to train and
assess deep learning models for classifying the 24 human chromosomes and
identifying chromosomal aberrations. The top-accuracy models utilized the
recently introduced Topological Vision Transformers (TopViTs) with
2-level-block-Toeplitz masking, to incorporate structural inductive bias.
TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome
identification, and exhibited accuracies >99% for detecting most types of
aberrations. Notably, we were able to show high-quality performance even in
"few shot" learning scenarios. Incorporating the definition of clonality
substantially improved both precision and recall (sensitivity). When applied to
"zero shot" scenarios, the model captured aberrations without training, with
perfect precision at >50% recall. Together these results show that modern deep
learning models can approach expert-level performance for chromosome aberration
detection. To our knowledge, this is the first study demonstrating the
downstream effectiveness of TopViTs. These results open up exciting
opportunities for not only expediting patient results but also providing a scalable
technology for early screening of low-abundance chromosomal lesions. |
Deep neuroevolution for limited, heterogeneous data: proof-of-concept application to Neuroblastoma brain metastasis using a small virtual pooled image collection | Artificial intelligence (AI) in radiology has made great strides in recent
years, but many hurdles remain. Overfitting and lack of generalizability
represent important ongoing challenges hindering accurate and dependable
clinical deployment. If AI algorithms can avoid overfitting and achieve true
generalizability, they can go from the research realm to the forefront of
clinical work. Recently, small data AI approaches such as deep neuroevolution
(DNE) have avoided overfitting small training sets. We seek to address both
overfitting and generalizability by applying DNE to a virtually pooled data set
consisting of images from various institutions. Our use case is classifying
neuroblastoma brain metastases on MRI. Neuroblastoma is well-suited for our
goals because it is a rare cancer. Hence, studying this pediatric disease
requires a small data approach. As a tertiary care center, the neuroblastoma
images in our local Picture Archiving and Communication System (PACS) are
largely from outside institutions. These multi-institutional images provide a
heterogeneous data set that can simulate real world clinical deployment. As in
prior DNE work, we used a small training set, consisting of 30 normal and 30
metastasis-containing post-contrast MRI brain scans, with 37% outside images.
The testing set was enriched with 83% outside images. DNE converged to a
testing set accuracy of 97%. Hence, the algorithm was able to predict image
class with near-perfect accuracy on a testing set that simulates real-world
data. The work described here thus represents a considerable contribution
toward clinically feasible AI. |
Realistic 3D printed imaging tumor phantoms for validation of image processing algorithms | Medical imaging phantoms are widely used for validation and verification of
imaging systems and algorithms in surgical guidance and radiation oncology
procedures. Especially, for the performance evaluation of new algorithms in the
field of medical imaging, manufactured phantoms need to replicate specific
properties of the human body, e.g., tissue morphology and radiological
properties. Additive manufacturing (AM) technology provides an inexpensive
opportunity for accurate anatomical replication with customization
capabilities. In this study, we proposed a simple and cheap protocol to
manufacture realistic tumor phantoms based on the filament 3D printing
technology. Tumor phantoms with both homogeneous and heterogeneous radiodensity
were fabricated. The radiodensity similarity between the printed tumor models
and real tumor data from CT images of lung cancer patients was evaluated.
Additionally, it was investigated whether a heterogeneity in the 3D printed
tumor phantoms as observed in the tumor patient data had an influence on the
validation of image registration algorithms. A density range between -217 to
226 HUs was achieved for 3D printed phantoms; this range of radiation
attenuation is also observed in human lung tumor tissue. The resulting HU
range could serve as a lookup table for researchers and phantom manufacturers to
create realistic CT tumor phantoms with the desired range of radiodensities.
The 3D printed tumor phantoms also precisely replicated real lung tumor patient
data regarding morphology and could also include life-like heterogeneity of the
radiodensity inside the tumor models. No influence of the heterogeneity on the
accuracy and robustness of the image registration algorithms was found. |
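A lookup table of the kind proposed above could be used to pick printing parameters for a desired radiodensity. The sketch below interpolates an infill percentage from calibration pairs; the calibration values are invented for illustration, and only the reported -217 to 226 HU span comes from the study:

```python
import numpy as np

def infill_for_target_hu(target_hu, calib_hu, calib_infill):
    """Interpolate the infill percentage needed to reach a target
    radiodensity, given measured (HU, infill) calibration pairs."""
    return float(np.interp(target_hu, calib_hu, calib_infill))

# Hypothetical calibration spanning the reported -217 to 226 HU range.
calib_hu = [-217, -100, 0, 110, 226]   # measured radiodensities (HU)
calib_infill = [20, 40, 60, 80, 100]   # printer infill percentages
print(infill_for_target_hu(0, calib_hu, calib_infill))  # -> 60.0
```

In practice each filament material would need its own calibration curve, since attenuation depends on both infill and material density.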
Towards Transcervical Ultrasound Image Guidance for Transoral Robotic Surgery | Purpose: Trans-oral robotic surgery (TORS) using the da Vinci surgical robot
is a new minimally-invasive surgery method to treat oropharyngeal tumors, but
it is a challenging operation. Augmented reality (AR) based on intra-operative
ultrasound (US) has the potential to enhance the visualization of the anatomy
and cancerous tumors to provide additional tools for decision-making in
surgery. Methods: We propose and carry out preliminary evaluations of a
US-guided AR system for TORS, with the transducer placed on the neck for a
transcervical view. Firstly, we perform a novel MRI-transcervical 3D US
registration study. Secondly, we develop a US-robot calibration method with an
optical tracker and an AR system to display the anatomy mesh model in the
real-time endoscope images inside the surgeon console. Results: Our AR system
reaches a mean projection error of 26.81 and 27.85 pixels for the projection
from the US to stereo cameras in a water bath experiment. The average target
registration error for MRI to 3D US is 8.90 mm for the 3D US transducer and
5.85 mm for freehand 3D US, and the average distance between the vessel
centerlines is 2.32 mm. Conclusion: We demonstrate the first proof-of-concept
transcervical US-guided AR system for TORS and the feasibility of
transcervical 3D US-MRI registration. Our results show that transcervical 3D
US is a promising technique for TORS image guidance. |
Fluorescent property of carbon dots extracted from cigarette smoke and the application in bio-imaging | Cigarette smoke is one of the six major pollution sources in the room air. It
contains large number of particles with size less than 10 nm. There exist
carbon dots (CDs) in cigarette smoke which have strong fluorescence and with
good bio-compatibility and low toxicity. CDs in cigarette smoke can be applied
in bio-imaging which has great potential applications in the integration of
cancer diagnosis and treatment. In this paper, CDs were extracted from
cigarette smoke. Then, sodium borohydride was added to CDs aqueous solution for
reduction and the reduced CDs (R-CDs) were used for biological cell imaging.
The results indicate that the CDs with particle sizes $<$10 nm in cigarette
smoke are self-assembled from polymerized polycyclic aromatic hydrocarbons
(PAHs) and ammonium nitrite, forming disk nanostructures composed of
$sp^2$/$sp^3$ carbon and oxygen/nitrogen groups or polymers. Sodium borohydride
can reduce the carbonyl groups on the surface of the CDs to hydroxyl groups and
increase the Na 1s ratio of the CDs from 1.86 to 7.42. The CDs can
emit blue fluorescence under ultraviolet irradiation. After reduction, the
R-CDs exhibit fluorescence 7.2 times more intense than before, and the
fluorescence quantum yield increases from 6.13\% to 8.86\%. The
photoluminescence (PL) wavelength of the R-CDs is red-shifted by 7 nm, which is
attributed to the increased Na ratio. Onion epidermal cells labeled with
R-CDs show that the CDs could pass through the cell wall into the cell and
reach the nucleus; both the cell wall and the nucleus could be clearly
visualized. The CDs also show low toxicity to human bronchial epithelial cells
(BEAS-2B) with good biological activity. These results indicate that the CDs
and R-CDs have good fluorescent properties and could be used as bio-imaging
agents. |
Nearby voids and their galaxies: recent progress and prospects | Voids occupy about 3/4 of the volume of the Universe and contain about 15% of
its mass. Due to various observational selection effects, these structural
elements, and the galaxies populating voids, are highly under-explored. This
especially applies to the lowest-mass galaxies, which comprise the main void
population. Studying the nearby voids allows us to improve our understanding of
the most elusive void objects. We present a brief overview of the current
status and prospects of the study of the nearest voids and their galaxies.
First, we summarize the pioneering study of a hundred galaxies residing in the
nearby Lynx-Cancer void, which provides clear evidence for the slower evolution
of void galaxies and also reveals unusual, very metal-poor and gas-rich dwarfs.
Then
we describe the recently defined sample of the nearby voids within the sphere
with R = 25 Mpc and a sample of 1350 galaxies residing in these voids (~20% of
all galaxies within this volume). We discuss the current results obtained for
several directions of the study of this sample. They include: the search for
Very Young Galaxies, the study of HI properties, the clustering of void
galaxies and its relation to the void substructures, and the unbiased study of
260 void galaxies within the Local Volume (R < 11 Mpc). Altogether, this opens
a promising way to address the suggested peculiarities of void galaxy
formation and evolution. Finally, we briefly overview the expected advancements
in the void galaxy studies related to the upcoming new facilities. |
Explainable AI for Bioinformatics: Methods, Tools, and Applications | Artificial intelligence (AI) systems utilizing deep neural networks (DNNs)
and machine learning (ML) algorithms are widely used for solving important
problems in bioinformatics, biomedical informatics, and precision medicine.
However, complex DNNs or ML models, which are often perceived as opaque and
black-box, can make it difficult to understand the reasoning behind their
decisions. This lack of transparency can be a challenge for both end-users and
decision-makers, as well as AI developers. Additionally, in sensitive areas
like healthcare, explainability and accountability are not only desirable but
also legally required for AI systems that can have a significant impact on
human lives. Fairness is another growing concern, as algorithmic decisions
should not show bias or discrimination towards certain groups or individuals
based on sensitive attributes. Explainable artificial intelligence (XAI) aims
to overcome the opaqueness of black-box models and provide transparency in how
AI systems make decisions. Interpretable ML models can explain how they make
predictions and the factors that influence their outcomes. However, most
state-of-the-art interpretable ML methods are domain-agnostic and evolved from
fields like computer vision, automated reasoning, or statistics, making direct
application to bioinformatics problems challenging without customization and
domain-specific adaptation. In this paper, we discuss the importance of
explainability in the context of bioinformatics, provide an overview of
model-specific and model-agnostic interpretable ML methods and tools, and
outline their potential caveats and drawbacks. We further discuss how to
customize existing interpretable ML methods for bioinformatics problems.
Finally, we demonstrate how XAI methods can improve transparency through case
studies in bioimaging, cancer genomics, and text mining. |
Protein Co-Enrichment Analysis of Extracellular Vesicles | Extracellular Vesicles (EVs) carry cell-derived proteins that confer
functionality and selective cell uptake. However, whether proteins are packaged
stochastically or co-enriched within individual EVs, and whether co-enrichment
fluctuates under homeostasis and disease, has not been measured. EV abundance
and global relative protein expression have been quantified by bulk analysis.
Meanwhile, co-enrichment is not directly accessible via bulk measurement and
has not been reported for single EV analysis. Here, we introduce the normalized
index of co-enrichment (NICE) to measure protein co-enrichment. NICE was
derived by (i) capturing EVs based on the expression of a membrane-bound
protein, (ii) probing for the co-expression of a second protein at the
population level - EV integrity underwrites the detection of single EV
co-expression without the need to resolve single EVs - and (iii) normalizing
measured values using two universal normalization probes. Axiomatically, NICE =
1 for stochastic inclusion or no overall co-enrichment, while for positive and
negative co-enrichment NICE > 1 or < 1, respectively. We quantified the NICE of
tetraspanins, growth factor receptors and integrins in EVs of eight breast
cancer cell lines of varying metastatic potential and organotropism,
combinatorially mapping up to 104 protein pairs. Our analysis revealed protein
enrichment and co-expression patterns consistent with previous findings. For
the organotropic cell lines, most protein pairs were co-enriched on EVs, with
the majority of NICE values between 0.2 and 11.5, and extending from 0.037 to
80.4. Median NICE values were negative, neutral, or positive depending on the
cells. NICE analysis is easily multiplexed and is compatible with microarrays,
bead-based and single EV assays. Additional studies are needed to deepen our
understanding of the potential and significance of NICE for research and
clinical uses. |
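The logic of such a co-enrichment index can be sketched numerically. The formula below is a hypothetical simplification, the ratio of observed co-expression to its expectation under independent, stochastic packaging, and is not the authors' exact normalization scheme:

```python
def nice_sketch(co_expression, marginal_a, marginal_b):
    """Hypothetical sketch of a normalized co-enrichment index:
    observed co-expression divided by its expectation under stochastic,
    independent inclusion. 1 = no co-enrichment, >1 positive, <1 negative."""
    expected = marginal_a * marginal_b
    return co_expression / expected

# Independent packaging gives a value near 1; enrichment pushes it above 1.
print(round(nice_sketch(0.30, 0.5, 0.6), 3))  # -> 1.0 (no co-enrichment)
print(round(nice_sketch(0.45, 0.5, 0.6), 3))  # -> 1.5 (positive)
print(round(nice_sketch(0.15, 0.5, 0.6), 3))  # -> 0.5 (negative)
```

The key point the abstract makes is that the real NICE uses universal normalization probes so that values from different capture/probe pairs are comparable, which this toy ratio does not attempt.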
Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images | The variation in histologic staining between different medical centers is one
of the most profound challenges in the field of computer-aided diagnosis. The
appearance disparity of pathological whole slide images causes algorithms to
become less reliable, which in turn impedes the widespread applicability of
downstream tasks like cancer diagnosis. Furthermore, different stainings lead
to biases in the training which in case of domain shifts negatively affect the
test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a
multi-domain approach to stain normalization based on CycleGAN. Our
modifications to CycleGAN allow us to normalize images of different origins
without retraining or using different models. We perform an extensive
evaluation of our method using various metrics and compare it to commonly used
methods that are multi-domain capable. First, we evaluate how well our method
fools a domain classifier that tries to assign a medical center to an image.
Then, we test our normalization on the tumor classification performance of a
downstream classifier. Furthermore, we evaluate the image quality of the
normalized images using the Structural similarity index and the ability to
reduce the domain shift using the Fr\'echet inception distance. We show that
our method proves to be multi-domain capable, provides the highest image
quality among the compared methods, and can most reliably fool the domain
classifier while keeping the tumor classifier performance high. By reducing the
domain influence, biases in the data can be removed on the one hand and the
origin of the whole slide image can be disguised on the other, thus enhancing
patient data privacy. |
On the Feasibility of Machine Learning Augmented Magnetic Resonance for Point-of-Care Identification of Disease | Early detection of many life-threatening diseases (e.g., prostate and breast
cancer) within at-risk population can improve clinical outcomes and reduce cost
of care. While numerous disease-specific "screening" tests that are closer to
Point-of-Care (POC) are in use for this task, their low specificity results in
unnecessary biopsies, leading to avoidable patient trauma and wasteful
healthcare spending. On the other hand, despite the high accuracy of Magnetic
Resonance (MR) imaging in disease diagnosis, it is not used as a POC disease
identification tool because of poor accessibility. The root cause of poor
accessibility of MR stems from the requirement to reconstruct high-fidelity
images, as it necessitates a lengthy and complex process of acquiring large
quantities of high-quality k-space measurements. In this study we explore the
feasibility of an ML-augmented MR pipeline that directly infers the disease,
sidestepping the image reconstruction process. We hypothesise that the disease
classification task can be solved using a very small tailored subset of k-space
data, compared to image reconstruction. Towards that end, we propose a method
that performs two tasks: 1) identifies a subset of the k-space that maximizes
disease identification accuracy, and 2) infers the disease directly using the
identified k-space subset, bypassing the image reconstruction step. We validate
our hypothesis by measuring the performance of the proposed system across
multiple diseases and anatomies. We show that comparable performance to
image-based classifiers, trained on images reconstructed with full k-space
data, can be achieved using small quantities of data: 8% of the data for
detecting multiple abnormalities in prostate and brain scans, and 5% of the
data for knee abnormalities. To better understand the proposed approach and
instigate future research, we provide an extensive analysis and release code. |
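The central hypothesis, that classification can work on a small subset of k-space without reconstructing an image, can be illustrated with a toy sketch. The fixed corner mask and the linear scorer below are assumptions for demonstration only, not the authors' learned subset selection or classifier:

```python
import numpy as np

def classify_from_kspace_subset(image, mask, weights):
    """Toy sketch: move the image to k-space, keep only the masked subset
    of coefficients, and score their magnitudes with a linear model --
    with no intermediate image reconstruction step."""
    kspace = np.fft.fft2(image)
    features = np.abs(kspace[mask])  # small subset of k-space measurements
    return float(features @ weights) > 0.0

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[:2, :2] = True  # keep only 4 of 256 (~1.6%) k-space coefficients
weights = rng.standard_normal(mask.sum())
label = classify_from_kspace_subset(image, mask, weights)
```

In the actual pipeline both the mask and the classifier are learned jointly, so the retained measurements are the ones most informative for the disease label rather than an arbitrary corner of k-space.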
BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations | Methods to detect malignant lesions from screening mammograms are usually
trained with fully annotated datasets, where images are labelled with the
localisation and classification of cancerous lesions. However, real-world
screening mammogram datasets commonly have a subset that is fully annotated and
another subset that is weakly annotated with just the global classification
(i.e., without lesion localisation). Given the large size of such datasets,
researchers usually face a dilemma with the weakly annotated subset: to not use
it or to fully annotate it. The first option will reduce detection accuracy
because it does not use the whole dataset, and the second option is too
expensive given that the annotation needs to be done by expert radiologists. In
this paper, we propose a middle-ground solution for the dilemma, which is to
formulate the training as a weakly- and semi-supervised learning problem that
we refer to as malignant breast lesion detection with incomplete annotations.
To address this problem, our new method comprises two stages, namely: 1)
pre-training a multi-view mammogram classifier with weak supervision from the
whole dataset, and 2) extending the trained classifier to become a multi-view
detector that is trained with semi-supervised student-teacher learning, where
the training set contains fully and weakly-annotated mammograms. We provide
extensive detection results on two real-world screening mammogram datasets
containing incomplete annotations, and show that our proposed approach achieves
state-of-the-art results in the detection of malignant breast lesions with
incomplete annotations. |
IMPORTANT-Net: Integrated MRI Multi-Parameter Reinforcement Fusion Generator with Attention Network for Synthesizing Absent Data | Magnetic resonance imaging (MRI) is highly sensitive for lesion detection in
the breasts. Sequences obtained with different settings can capture the
specific characteristics of lesions. Such multi-parameter MRI information has
been shown to improve radiologist performance in lesion classification, as well
as improving the performance of artificial intelligence models in various
tasks. However, obtaining multi-parameter MRI makes the examination costly in
both financial and time perspectives, and there may be safety concerns for
special populations, thus making acquisition of the full spectrum of MRI
sequences less feasible. In this study, different from naive input fusion or
feature concatenation from existing MRI parameters, a novel
$\textbf{I}$ntegrated MRI $\textbf{M}$ulti-$\textbf{P}$arameter
reinf$\textbf{O}$rcement fusion generato$\textbf{R}$ wi$\textbf{T}$h
$\textbf{A}$tte$\textbf{NT}$ion Network (IMPORTANT-Net) is developed to
generate missing parameters. First, the parameter reconstruction module is used
to encode and restore the existing MRI parameters to obtain the corresponding
latent representation information at any scale level. Then the multi-parameter
fusion with attention module enables the interaction of the encoded information
from different parameters through a set of algorithmic strategies, and applies
different weights to the information through the attention mechanism after
information fusion to obtain refined representation information. Finally, a
reinforcement fusion scheme embedded in a $V^{-}$-shape generation module is
used to combine the hierarchical representations to generate the missing MRI
parameter. Results showed that our IMPORTANT-Net is capable of generating
missing MRI parameters and outperforms comparable state-of-the-art networks.
Our code is available at
https://github.com/Netherlands-Cancer-Institute/MRI_IMPORTANT_NET. |
Probabilistic Attention based on Gaussian Processes for Deep Multiple Instance Learning | Multiple Instance Learning (MIL) is a weakly supervised learning paradigm
that is becoming increasingly popular because it requires less labeling effort
than fully supervised methods. This is especially interesting for areas where
the creation of large annotated datasets remains challenging, as in medicine.
Although recent deep learning MIL approaches have obtained state-of-the-art
results, they are fully deterministic and do not provide uncertainty
estimations for the predictions. In this work, we introduce the Attention
Gaussian Process (AGP) model, a novel probabilistic attention mechanism based
on Gaussian Processes for deep MIL. AGP provides accurate bag-level predictions
as well as instance-level explainability, and can be trained end-to-end.
Moreover, its probabilistic nature guarantees robustness to overfitting on
small datasets and uncertainty estimations for the predictions. The latter is
especially important in medical applications, where decisions have a direct
impact on the patient's health. The proposed model is validated experimentally
as follows. First, its behavior is illustrated in two synthetic MIL experiments
based on the well-known MNIST and CIFAR-10 datasets, respectively. Then, it is
evaluated in three different real-world cancer detection experiments. AGP
outperforms state-of-the-art MIL approaches, including deterministic deep
learning ones. It shows a strong performance even on a small dataset with less
than 100 labels and generalizes better than competing methods on an external
test set. Moreover, we experimentally show that predictive uncertainty
correlates with the risk of wrong predictions, and therefore it is a good
indicator of reliability in practice. Our code is publicly available. |
A network-based biomarkers discovery of Cold/Hot ZHENG chronic gastritis and Cold/Hot herbs of formulae | Objective: To discover biomarkers and uncover the mechanism of Cold/Hot ZHENG
(syndrome in traditional Chinese medicine) chronic gastritis (CG) and Cold/Hot
herbs in traditional Chinese medicine (TCM) formulae on systematic biology.
Background: CG is a common inflammatory disease and the diagnosis of CG in TCM
can be classified into Cold ZHENG (Asthenic Cold) and Hot ZHENG (Excess Hot).
However, the molecular features of Cold/Hot ZHENG in CG and the mechanism of
Cold/Hot herbs in formulae for CG remained unclear. Methods: Based on data
from 35 patients with Cold/Hot ZHENG CG and 3 scRNA-seq CG samples, we
conducted analyses with transcriptomics datasets and algorithms to discover
biomarkers for Cold/Hot ZHENG CG. We also collected 25 formulae (with
traditional effects related to Cold/Hot ZHENG) for CG and the corresponding 89
Cold/Hot herbs (including Warm/Cool herbs) to discover features and construct
target networks of Cold/Hot herbs on the basis of network target and enrichment
analysis.
Results: Biomarkers of Cold/Hot ZHENG CG, represented by CCL2 and LEP,
suggested that Hot ZHENG CG might be characterized by over-inflammation and
exuberant metabolism, while Cold ZHENG CG showed a trend of suppressed immune
regulation and energy metabolism. Biomarkers of Cold/Hot ZHENG also showed
significant changes in the progression of gastric cancer. Biomarkers and
pathways of Hot herbs tend to regulate immune responses and energy metabolism,
while those of Cold herbs are likely to participate in anti-inflammation
effects. Conclusion: In this study, we found that the
biomarkers and mechanism of Cold/Hot ZHENG CG and those of Cold/Hot herbs were
closely related to the regulation of immunity and metabolism. These findings may
reflect the mechanism, build bridges between multiple views of Cold/Hot ZHENG
and Cold/Hot herbs, and provide a research paradigm for further achieving
precision TCM. |
Target Specific De Novo Design of Drug Candidate Molecules with Graph Transformer-based Generative Adversarial Networks | Discovering novel drug candidate molecules is one of the most fundamental and
critical steps in drug development. Generative deep learning models, which
create synthetic data given a probability distribution, have been developed
with the purpose of picking completely new samples from a partially known
space. Generative models offer high potential for designing de novo molecules;
however, in order for them to be useful in real-life drug development
pipelines, these models should be able to design target-specific molecules,
which is the next step in this field. In this study, we propose DrugGEN, for
the de novo design of drug candidate molecules that interact with selected
target proteins. The proposed system represents compounds and protein
structures as graphs and processes them via two serially connected generative
adversarial networks comprising graph transformers. DrugGEN is trained using a
large dataset of compounds from ChEMBL and target-specific bioactive molecules,
to design effective and specific inhibitory molecules against the AKT1 protein,
which has critical importance for developing treatments against various types
of cancer. On fundamental benchmarks, DrugGEN models have either competitive or
better performance against other methods. To assess the target-specific
generation performance, we conducted further in silico analysis with molecular
docking and deep learning-based bioactivity prediction. Results indicate that
de novo molecules have high potential for interacting with the AKT1 protein
structure at the level of its native ligand. DrugGEN can be used to design
completely novel and effective target-specific drug candidate molecules for any
druggable protein, given target features and a dataset of experimental
bioactivities. Code base, datasets, results and trained models of DrugGEN are
available at https://github.com/HUBioDataLab/DrugGEN |
3D PETCT Tumor Lesion Segmentation via GCN Refinement | Whole-body PET/CT scan is an important tool for diagnosing various
malignancies (e.g., malignant melanoma, lymphoma, or lung cancer), and accurate
segmentation of tumors is a key part for subsequent treatment. In recent years,
CNN-based segmentation methods have been extensively investigated. However,
these methods often give inaccurate segmentation results, such as
over-segmentation and under-segmentation. Therefore, to address such issues, we
propose a post-processing method based on a graph convolutional neural network
(GCN) to refine inaccurate segmentation parts and improve the overall
segmentation accuracy. Firstly, nnUNet is used as an initial segmentation
framework, and the uncertainty in the segmentation results is analyzed.
Voxels with certain and uncertain predictions form the nodes of a graph neural
network. Each node forms edges with its 6 neighbors, and each uncertain node
additionally forms edges with 32 randomly selected nodes. The highly uncertain
nodes are taken as the
subsequent refinement targets. Secondly, the nnUNet results at the certain
nodes are used as labels to form a semi-supervised graph learning problem, and
the uncertain part is optimized by training the GCN to improve the segmentation
performance. This constitutes our proposed nnUNet-GCN segmentation
framework. We perform tumor segmentation experiments on the PET/CT dataset in
the MICCAI 2022 autoPET challenge. Among them, 30 cases are randomly selected
for testing, and the experimental results show that the false positive rate is
effectively reduced with nnUNet-GCN refinement. In quantitative analysis, there
is an improvement of 2.12 % on the average Dice score, 6.34 on 95 % Hausdorff
Distance (HD95), and 1.72 on average symmetric surface distance (ASSD). The
quantitative and qualitative evaluation results show that GCN post-processing
methods can effectively improve tumor segmentation performance. |
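The node/edge scheme described in this abstract can be sketched as follows. The uncertainty threshold is an assumed parameter and the snippet only illustrates the graph construction, not the authors' implementation:

```python
import numpy as np

def build_refinement_graph(uncertainty, threshold=0.3, n_random=32, seed=0):
    """Build edges for a refinement graph over a 3D uncertainty map:
    every voxel is a node, each node links to its 6 face-neighbours, and
    each highly uncertain node gains edges to 32 randomly chosen nodes."""
    rng = np.random.default_rng(seed)
    idx = np.arange(uncertainty.size).reshape(uncertainty.shape)
    edges = []
    # 6-connectivity: pair adjacent voxels along each of the 3 axes.
    for axis in range(uncertainty.ndim):
        a = np.moveaxis(idx, axis, 0)
        edges.append(np.stack([a[:-1].ravel(), a[1:].ravel()], axis=1))
    # Extra long-range edges for uncertain nodes only.
    for u in idx.ravel()[uncertainty.ravel() > threshold]:
        targets = rng.choice(uncertainty.size, size=n_random, replace=False)
        edges.append(np.stack([np.full(n_random, u), targets], axis=1))
    return np.concatenate(edges)

unc = np.zeros((4, 4, 4))
unc[2, 2, 2] = 0.9                 # one highly uncertain voxel
graph = build_refinement_graph(unc)
print(graph.shape)                 # -> (176, 2): 144 local + 32 random edges
```

A GCN trained on such a graph can then propagate labels from the certain nodes to the uncertain ones, which is the semi-supervised refinement step the abstract describes.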
Designing and simulating realistic spatial frequency domain imaging systems using open-source 3D rendering software | Spatial frequency domain imaging (SFDI) is a low-cost imaging technique that
can deliver real-time maps of absorption and reduced scattering coefficients.
However, there are a wide range of imaging geometries that practical SFDI
systems must cope with including imaging flat samples ex vivo, imaging inside
tubular lumen in vivo such as in an endoscopy, and measuring tumours or polyps
of varying shapes, sizes and optical properties. There is a need for a design
and simulation tool to accelerate design and fabrication of new SFDI systems.
We present such a system implemented using open-source 3D design and
ray-tracing software Blender that is capable of simulating media with realistic
optical properties (mimicking healthy and cancerous tissue), a wide variety of
shapes and sizes, and in both planar and tubular imaging geometries. We first
demonstrate quantitative agreement between Monte-Carlo simulated scattering and
absorption coefficients and those measured from our Blender system. Next, we
show the ability of the system to simulate absorption, scattering and shape for
flat samples with small simulated tumours and show that the improved contrast
associated with SFDI is reproduced. Finally, to demonstrate the versatility of
the system as a design tool we show that it can be used to generate a custom
look-up-table for mapping from modulation amplitude values to absorption and
scattering values in a tubular geometry, simulating a lumen. As a demonstrative
example we show that longitudinal sectioning of the tube, with separate look-up
tables for each section, significantly improves accuracy of SFDI, representing
an important design insight for future systems. We therefore anticipate our
simulation system will significantly aid in the design and development of novel
SFDI systems, especially as such systems are miniaturised for deployment in
endoscopic and laparoscopic systems. |
The Race of mRNA therapy: Evidence from Patent Landscape | mRNA therapy is gaining worldwide attention as an emerging therapeutic
approach. The widespread use of mRNA vaccines during the COVID-19 outbreak has
demonstrated the potential of mRNA therapy. As mRNA-based drugs have expanded
and their indications have broadened, more patents for mRNA innovations have
emerged. The global patent landscape for mRNA therapy has not yet been
analyzed, indicating a research gap in need of filling, from new technology to
productization. This study uses social network analysis together with patent
quality assessment to investigate the temporal trends, citation relationships, and
significant litigation for 16,101 mRNA therapy patents and summarizes the hot
topics and potential future directions for this industry. The information
obtained in this study may serve not only as a comprehensive, integrated
knowledge resource for researchers but also as inspiration for efficient
production methods for mRNA drugs. This study shows
that infectious diseases and cancer are currently the primary applications for
mRNA drugs. Emerging patent activity and lawsuits in this field are
demonstrating that delivery technology remains one of the key challenges in the
field and that drug-targeting research in combination with vector technology
will be one of the major directions for the industry going forward. With
significant funding, new organizations have developed novel delivery
technologies in an attempt to break into the patent thicket established by
companies such as Arbutus. The global mRNA therapeutic landscape is undergoing
a multifaceted development pattern, and the monopoly of giant companies is
being challenged. |
OpenTPS -- Open-source treatment planning system for research in proton therapy | Introduction. Treatment planning systems (TPS) are an essential component for
simulating and optimizing a radiation therapy treatment before administering it
to the patient. It ensures that the tumor is well covered and the dose to the
healthy tissues is minimized. However, the TPSs provided by commercial
companies often come with a large panel of tools, each implemented as a black
box, making it difficult for researchers to use them to implement and test new
ideas. To address this issue, we have developed an open-source TPS.
Approach. We have developed an open-source software platform, OpenTPS
(opentps.org), to generate treatment plans for external beam radiation therapy,
and in particular for proton therapy. It is designed to be a flexible and
user-friendly platform (coded with the freely usable Python language) that can
be used by medical physicists, radiation oncologists, and other members of the
radiation therapy community to create customized treatment plans for
educational and research purposes. Result. OpenTPS includes a range of tools
and features that can be used to analyze patient anatomy, simulate the delivery
of the radiation beam, and optimize the treatment plan to achieve the desired
dose distribution. It can be used to create treatment plans for a variety of
cancer types and was designed to be extended to other treatment modalities.
Significance. A new open-source treatment planning system has been built for
research in proton therapy. Its flexibility allows an easy integration of new
techniques and customization of treatment plans. It is freely available for use
and is regularly updated and supported by a community of users and developers
who contribute to the ongoing development and improvement of the software. |
Evolutionary Computation in Action: Feature Selection for Deep Embedding Spaces of Gigapixel Pathology Images | One of the main obstacles of adopting digital pathology is the challenge of
efficient processing of hyperdimensional digitized biopsy samples, called whole
slide images (WSIs). Exploiting deep learning and introducing compact WSI
representations are urgently needed to accelerate image analysis and facilitate
the visualization and interpretability of pathology results in a postpandemic
world. In this paper, we introduce a new evolutionary approach for WSI
representation based on large-scale multi-objective optimization (LSMOP) of
deep embeddings. We start with patch-based sampling to feed KimiaNet, a
histopathology-specialized deep network, and to extract a multitude of feature
vectors. Coarse multi-objective feature selection uses the reduced search space
strategy guided by the classification accuracy and the number of features. In
the second stage, the frequent features histogram (FFH), a novel WSI
representation, is constructed by multiple runs of coarse LSMOP. Fine
evolutionary feature selection is then applied to find a compact (short-length)
feature vector based on the FFH and contributes to a more robust deep-learning
approach to digital pathology supported by the stochastic power of evolutionary
algorithms. We validate the proposed schemes using The Cancer Genome Atlas
(TCGA) images in terms of WSI representation, classification accuracy, and
feature quality. Furthermore, a novel decision space for multicriteria decision
making in the LSMOP field is introduced. Finally, a patch-level visualization
approach is proposed to increase the interpretability of deep features. The
proposed evolutionary algorithm finds a very compact feature vector to
represent a WSI (almost 14,000 times smaller than the original feature vectors)
with 8% higher accuracy compared to the codes provided by the state-of-the-art
methods. |
TransNetR: Transformer-based Residual Network for Polyp Segmentation with Multi-Center Out-of-Distribution Testing | Colonoscopy is considered the most effective screening test to detect
colorectal cancer (CRC) and its precursor lesions, i.e., polyps. However, the
procedure experiences high miss rates due to polyp heterogeneity and
inter-observer dependency. Hence, several deep learning powered systems have
been proposed considering the criticality of polyp detection and segmentation
in clinical practices. Despite achieving improved outcomes, the existing
automated approaches are inefficient in attaining real-time processing speed.
Moreover, they suffer from a significant performance drop when evaluated on
inter-patient data, especially those collected from different centers.
Therefore, we intend to develop a novel real-time deep learning based
architecture, Transformer based Residual network (TransNetR), for colon polyp
segmentation and evaluate its diagnostic performance. The proposed
architecture, TransNetR, is an encoder-decoder network that consists of a
pre-trained ResNet50 as the encoder, three decoder blocks, and an upsampling
layer at the end of the network. TransNetR obtains a high dice coefficient of
0.8706 and a mean Intersection over union of 0.8016 and retains a real-time
processing speed of 54.60 on the Kvasir-SEG dataset. Apart from this, the major
contribution of the work lies in exploring the generalizability of the
TransNetR by testing the proposed algorithm on the out-of-distribution (test
distribution is unknown and different from training distribution) dataset. As a
use case, we tested our proposed algorithm on the PolypGen (6 unique centers)
dataset and two other popular polyp segmentation benchmarking datasets. We
obtained state-of-the-art performance on all three datasets during
out-of-distribution testing. The source code of TransNetR will be made publicly
available at https://github.com/DebeshJha. |
End-to-end Deformable Attention Graph Neural Network for Single-view Liver Mesh Reconstruction | Intensity modulated radiotherapy (IMRT) is one of the most common modalities
for treating cancer patients. One of the biggest challenges is precise
treatment delivery that accounts for varying motion patterns originating from
free-breathing. Currently, image-guided solutions for IMRT are limited to 2D
guidance due to the complexity of 3D tracking solutions. We propose a novel
end-to-end attention graph neural network model that generates in real-time a
triangular shape of the liver based on a reference segmentation obtained at the
preoperative phase and a 2D MRI coronal slice taken during the treatment. Graph
neural networks work directly with graph data and can capture hidden patterns
in non-Euclidean domains. Furthermore, contrary to existing methods, it
produces the shape entirely in a mesh structure and correctly infers mesh shape
and position based on a surrogate image. We define two on-the-fly approaches to
make the correspondence of liver mesh vertices with 2D images obtained during
treatment. Furthermore, we introduce a novel task-specific identity loss to
constrain the deformation of the liver in the graph neural network to limit
phenomena such as flying vertices or mesh holes. The proposed method achieves
results with an average error of 3.06 +- 0.7 mm and Chamfer distance with L2
norm of 63.14 +- 27.28. |
Molecular Identification, Antioxidant Efficacy of Phenolic Compounds, and Antimicrobial Activity of Beta-Carotene Isolated from Fruiting Bodies of Suillus sp | Suillus species, in general, are edible, environmentally
important mushrooms that are associated mostly with pine trees in tropical
regions. These fungi are considered a remarkable source of phenolic compounds
that play a crucial role as antioxidants, which may reduce the risk of most
human chronic diseases such as cancer, diabetes, asthma, atherosclerosis,
Alzheimer's disease, and others. On the other hand, carotenoids (beta-carotene)
are the most popular natural pigments, which play an important role in
protecting plants from photo-oxidative reactions. In humans, these compounds
prevent oxidative stress and are expected to have antimicrobial activity.
Here, the phenolic compounds were
extracted with ethyl acetate from fruiting bodies of Suillus sp and analyzed by
HPLC, and the antioxidant activity (reducing power %) of the phenolic compounds
was determined at concentrations of 1, 2.5, and 5 mg/mL. The antimicrobial
activity of the beta-carotene pigment was measured at a concentration of 100
mg/mL against several human pathogenic bacteria, namely Escherichia coli,
Pseudomonas aeruginosa, Klebsiella pneumoniae, and Staphylococcus aureus. The
specific DNA
region ITS was amplified and sequenced using ITS1 and ITS4 primers with some
bioinformatics analyses. The phenolic extract isolated from fruiting bodies of
Suillus sp showed remarkable antioxidant activity, increasing the reducing
power percentage (from Fe3+ ions to Fe2+ ions) compared with the industrial
antioxidant propyl gallate (PG) at all tested concentrations. The reducing
power of the phenolic compounds was 75.5, 84.9, and 95.7% at concentrations of
1, 2.5, and 5 mg/mL, respectively, compared with 65.9, 81.3, and 93.3% for PG
at the same concentrations. The beta-carotene pigment revealed significant
antimicrobial activity at a concentration of 100 mg/mL against K. pneumoniae,
E. coli, and S. aureus. |
Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential | The large language model called ChatGPT has drawn extensive attention
because of its human-like expression and reasoning abilities. In this study, we
investigate the feasibility of using ChatGPT to translate radiology reports
into plain language for patients and healthcare providers, so that they are
better educated for improved healthcare. Radiology reports
from 62 low-dose chest CT lung cancer screening scans and 76 brain MRI
metastases screening scans were collected in the first half of February for
this study. According to the evaluation by radiologists, ChatGPT can
successfully translate radiology reports into plain language with an average
score of 4.27 in the five-point system with 0.08 places of information missing
and 0.07 places of misinformation. The suggestions provided by ChatGPT are
generally relevant, such as keeping up with follow-up appointments and closely
monitoring any symptoms, and for about 37% of the 138 cases in total, ChatGPT
offers specific suggestions based on findings in the report. ChatGPT
also presents some randomness in its responses with occasionally
over-simplified or neglected information, which can be mitigated using a more
detailed prompt. Furthermore, ChatGPT results are compared with a newly
released large model GPT-4, showing that GPT-4 can significantly improve the
quality of translated reports. Our results show that it is feasible to utilize
large language models in clinical education, and further efforts are needed to
address limitations and maximize their potential. |
A Data Augmentation Method and the Embedding Mechanism for Detection and Classification of Pulmonary Nodules on Small Samples | Detection of pulmonary nodules by CT is used for screening lung cancer in
early stages. Computer-aided diagnosis (CAD) based on deep learning can
identify suspected areas of pulmonary nodules in CT images, thus improving
the accuracy and efficiency of CT diagnosis. However, the accuracy and
robustness of deep learning models are limited by small sample sizes. Method:
In this paper, we explore (1) a data augmentation method based on a generative
model and (2) a model structure improvement method based on an embedding
mechanism. Two strategies have been introduced in this study: a new data
augmentation method and an embedding mechanism. In the augmentation method, a
3D pixel-level statistics algorithm is proposed to generate pulmonary nodules,
and by combining the synthetic pulmonary nodules with healthy lungs, we
generate new pulmonary nodule samples. The embedding mechanism is designed to
better understand the meaning of pixels in the pulmonary nodule samples by
introducing hidden variables. Result: The result of
the 3DVNET model with the augmentation method for pulmonary nodule detection
shows that the proposed data augmentation method outperforms the method based
on the generative adversarial network (GAN) framework, with training accuracy
improved by 1.5%. The result with the embedding mechanism for pulmonary nodule
classification shows that the embedding mechanism markedly improves the
accuracy and robustness of pulmonary nodule classification: the model training
accuracy is close to 1 and the model testing F1-score is 0.90. Conclusion: The
proposed data augmentation method and embedding mechanism are beneficial for
improving the accuracy and robustness of the model and can be further applied
to other common diagnostic imaging tasks. |
Medical diffusion on a budget: textual inversion for medical image generation | Diffusion-based models for text-to-image generation have gained immense
popularity due to recent advancements in efficiency, accessibility, and
quality. Although it is becoming increasingly feasible to perform inference
with these systems using consumer-grade GPUs, training them from scratch still
requires access to large datasets and significant computational resources. In
the case of medical image generation, the availability of large, publicly
accessible datasets that include text reports is limited due to legal and
ethical concerns. While training a diffusion model on a private dataset may
address this issue, it is not always feasible for institutions lacking the
necessary computational resources. This work demonstrates that pre-trained
Stable Diffusion models, originally trained on natural images, can be adapted
to various medical imaging modalities by training text embeddings with textual
inversion. In this study, we conducted experiments using medical datasets
comprising only 100 samples from three medical modalities. Embeddings were
trained in a matter of hours, while still retaining diagnostic relevance in
image generation. Experiments were designed to achieve several objectives.
Firstly, we fine-tuned the training and inference processes of textual
inversion, revealing that larger embeddings and more examples are required.
Secondly, we validated our approach by demonstrating a 2% increase in the
diagnostic accuracy (AUC) for detecting prostate cancer on MRI, which is a
challenging multi-modal imaging modality, from 0.78 to 0.80. Thirdly, we
performed simulations by interpolating between healthy and diseased states,
combining multiple pathologies, and inpainting to show embedding flexibility
and control of disease appearance. Finally, the embeddings trained in this
study are small (less than 1 MB), which facilitates easy sharing of medical
data with reduced privacy concerns. |
How to design a MAMS-ROCI (aka DURATIONS) randomised trial: the REFINE-Lung case study | Background. The DURATIONS design has been recently proposed as a practical
alternative to a standard two-arm non-inferiority design when the goal is to
optimise some continuous aspect of treatment administration, e.g. duration or
frequency, preserving efficacy but improving on secondary outcomes such as
safety, costs or convenience. The main features of this design are that (i) it
randomises patients to a moderate number of arms across the continuum and (ii)
it uses a model to share information across arms. While papers published to
date about the design have focused on analysis aspects, here we show how to
design such a trial in practice. We use the REFINE-Lung trial as an example;
this is a trial seeking the optimal frequency of immunotherapy treatment for
non-small cell lung cancer patients. Because the aspect of treatment
administration to optimise is frequency, rather than duration, we propose to
rename the design as Multi-Arm Multi-Stage Response Over Continuous
Intervention (MAMS-ROCI). Methods. We show how simulations can be used to
design such a trial. We propose to use the ADEMP framework to plan such
simulations, clearly specifying aims, data generating mechanisms, estimands,
methods and performance measures before coding and analysing the simulations.
We discuss the possible choices to be made using the REFINE-Lung trial as an
example. Results. We describe all the choices made while designing the
REFINE-Lung trial, and the results of the simulations performed. We justify our
choice of total sample size based on these results. Conclusions. MAMS-ROCI
trials can be designed using simulation studies that have to be carefully
planned and conducted. REFINE-Lung has been designed using such an approach and
we have shown how researchers could similarly design their own MAMS-ROCI trial. |
Deep-Learning-based Fast and Accurate 3D CT Deformable Image Registration in Lung Cancer | Purpose: In some proton therapy facilities, patient alignment relies on two
2D orthogonal kV images, taken at fixed, oblique angles, as no 3D on-the-bed
imaging is available. The visibility of the tumor in kV images is limited since
the patient's 3D anatomy is projected onto a 2D plane, especially when the
tumor is behind high-density structures such as bones. This can lead to large
patient setup errors. A solution is to reconstruct the 3D CT image from the kV
images obtained at the treatment isocenter in the treatment position.
Methods: An asymmetric autoencoder-like network built with vision-transformer
blocks was developed. The data was collected from 1 head and neck patient: 2
orthogonal kV images (1024x1024 voxels), 1 3D CT with padding (512x512x512)
acquired from the in-room CT-on-rails before kVs were taken and 2
digitally-reconstructed-radiograph (DRR) images (512x512) based on the CT. We
resampled kV images every 8 voxels and DRR and CT every 4 voxels, thus formed a
dataset consisting of 262,144 samples, in which the images have a dimension of
128 for each direction. In training, both kV and DRR images were utilized, and
the encoder was encouraged to learn the jointed feature map from both kV and
DRR images. In testing, only independent kV images were used. The full-size
synthetic CT (sCT) was achieved by concatenating the sCTs generated by the
model according to their spatial information. The image quality of the
sCT was evaluated using mean absolute error (MAE) and
per-voxel-absolute-CT-number-difference volume histogram (CDVH).
Results: The model achieved a speed of 2.1 s and an MAE of <40 HU. The CDVH
showed that <5% of the voxels had a per-voxel-absolute-CT-number-difference
larger than 185 HU.
Conclusion: A patient-specific vision-transformer-based network was developed
and shown to be accurate and efficient to reconstruct 3D CT images from kV
images. |
Benchmarking ChatGPT-4 on ACR Radiation Oncology In-Training (TXIT) Exam and Red Journal Gray Zone Cases: Potentials and Challenges for AI-Assisted Medical Education and Decision Making in Radiation Oncology | The potential of large language models in medicine for education and decision
making purposes has been demonstrated as they achieve decent scores on medical
exams such as the United States Medical Licensing Exam (USMLE) and the MedQA
exam. In this work, we evaluate the performance of ChatGPT-4 in the specialized
field of radiation oncology using the 38th American College of Radiology (ACR)
radiation oncology in-training (TXIT) exam and the 2022 Red Journal gray zone
cases. For the TXIT exam, ChatGPT-3.5 and ChatGPT-4 have achieved the scores of
63.65% and 74.57%, respectively, highlighting the advantage of the latest
ChatGPT-4 model. Based on the TXIT exam, ChatGPT-4's strong and weak areas in
radiation oncology are identified to some extent. Specifically, ChatGPT-4
demonstrates good knowledge of statistics, CNS & eye, pediatrics, biology, and
physics but has limitations in bone & soft tissue and gynecology, as per the
ACR knowledge domain. Regarding clinical care paths, ChatGPT-4 performs well in
diagnosis, prognosis, and toxicity but lacks proficiency in topics related to
brachytherapy and dosimetry, as well as in-depth questions from clinical
trials. For the gray zone cases, ChatGPT-4 is able to suggest a personalized
treatment approach to each case with high correctness and comprehensiveness.
Most importantly, it provides novel treatment aspects for many cases, which are
not suggested by any human experts. Both evaluations demonstrate the potential
of ChatGPT-4 in medical education for the general public and cancer patients,
as well as the potential to aid clinical decision-making, while acknowledging
its limitations in certain domains. Because of the risk of hallucination, facts
provided by ChatGPT always need to be verified. |
Prediction of brain tumor recurrence location based on multi-modal fusion and nonlinear correlation learning | Brain tumor is one of the leading causes of cancer death. High-grade
brain tumors are more likely to recur even after standard treatment. Therefore,
developing a method to predict brain tumor recurrence location plays an
important role in treatment planning and can potentially prolong patients'
survival time. Little work has addressed this issue so far. In
this paper, we present a deep learning-based brain tumor recurrence location
prediction network. Since the dataset is usually small, we propose to use
transfer learning to improve the prediction. We first train a multi-modal brain
tumor segmentation network on the public dataset BraTS 2021. Then, the
pre-trained encoder is transferred to our private dataset for extracting the
rich semantic features. Following that, a multi-scale multi-channel feature
fusion model and a nonlinear correlation learning module are developed to learn
the effective features. The correlation between multi-channel features is
modeled by a nonlinear equation. To measure the similarity between the
distributions of original features of one modality and the estimated correlated
features of another modality, we propose to use Kullback-Leibler divergence.
Based on this divergence, a correlation loss function is designed to maximize
the similarity between the two feature distributions. Finally, two decoders are
constructed to jointly segment the present brain tumor and predict its future
tumor recurrence location. To the best of our knowledge, this is the first work
that can segment the present tumor and at the same time predict future tumor
recurrence location, making the treatment planning more efficient and precise.
The experimental results demonstrated the effectiveness of our proposed method
to predict the brain tumor recurrence location from the limited dataset. |
Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video | The purpose of this study is to develop a computer-aided diagnosis system for
classifying benign and malignant lung lesions, and to assist physicians in
real-time analysis of radial probe endobronchial ultrasound (EBUS) videos.
During the biopsy process of lung cancer, physicians use real-time ultrasound
images to find suitable lesion locations for sampling. However, most of these
images are difficult to classify and contain a lot of noise. Previous studies
have employed 2D convolutional neural networks to effectively differentiate
between benign and malignant lung lesions, but doctors still need to manually
select good-quality images, which can result in additional labor costs. In
addition, the 2D neural network has no ability to capture the temporal
information of the ultrasound video, so it is difficult to obtain the
relationship between the features of the continuous images. This study designs
an automatic diagnosis system based on a 3D neural network, uses the SlowFast
architecture as the backbone to fuse temporal and spatial features, and uses
the SwAV method of contrastive learning to enhance the noise robustness of the
model. The method we propose offers the following advantages: (1)
using clinical ultrasound films as model input, thereby reducing the need for
high-quality image selection by physicians, (2) high-accuracy classification of
benign and malignant lung lesions can assist doctors in clinical diagnosis and
reduce the time and risk of surgery, and (3) the capability to classify well
even in the presence of significant image noise. The AUC, accuracy, precision,
recall and specificity of our proposed method on the validation set reached
0.87, 83.87%, 86.96%, 90.91% and 66.67%, respectively. The results have
verified the importance of incorporating temporal information and the
effectiveness of using the method of contrastive learning on feature
extraction. |
eXplainable Artificial Intelligence on Medical Images: A Survey | Over the last few years, the number of works about deep learning applied to
the medical field has increased enormously. A rigorous assessment of these
models is required to explain their results to all people involved in medical
exams. A recent field in the machine learning area is explainable artificial
intelligence, also known as XAI, which aims to explain the results of such
black-box models to permit the desired assessment.
This survey analyses several recent studies in the XAI field applied to medical
diagnosis research, allowing some explainability of the machine learning
results in several different diseases, such as cancers and COVID-19. |
On-line Dose Calculation Using Deep Learning for Beams Selection in Non-Coplanar Radiotherapy | Non-coplanar Intensity-Modulated Radiation Therapy (IMRT) goes a step further
by orienting the gantry carrying the radiation beam and the patient couch in a
non-coplanar manner to accurately target the cancer region and better avoid
organs-at-risk. The use of a non-coplanar treatment trajectory significantly
enhances the degrees of freedom and flexibility but drastically increases the
complexity of the optimization. In inverse planning optimization, the dose
contribution for all potential beam directions is usually pre-calculated and
pre-loaded into the Treatment Planning System (TPS). The size of the dose
matrix becomes more critical when moving from coplanar IMRT to non-coplanar
IMRT since the number of beams increases drastically. A solution would be to
calculate
"on-the-fly" the dose contribution to each new candidate beam during
optimization. This is only possible if a dose calculation engine is fast enough
to be used online during optimization iterations, which is not the case with
standard methods. Therefore, in this work we propose an IMRT optimization
scheme using a deep-learning-based dose engine to compute the dose matrix
on-line. The proposed deep learning approach is combined with a
simulated-annealing-based optimization method for non-coplanar IMRT. Since the
dose engine computes the dose contribution on-line during the optimization,
the final optimization method only needs to keep a very lightweight dose
matrix in memory. The proposed method was compared with clinical data, showing
good agreement in the dosimetry of the treatment plans. The main
advantage of the proposed method was the reduction of the memory storage from
9GB to 10MB during the optimization process. |
AdaMSS: Adaptive Multi-Modality Segmentation-to-Survival Learning for Survival Outcome Prediction from PET/CT Images | Survival prediction is a major concern for cancer management. Deep survival
models based on deep learning have been widely adopted to perform end-to-end
survival prediction from medical images. Recent deep survival models achieved
promising performance by jointly performing tumor segmentation with survival
prediction, where the models were guided to extract tumor-related information
through Multi-Task Learning (MTL). However, these deep survival models have
difficulties in exploring out-of-tumor prognostic information. In addition,
existing deep survival models are unable to effectively leverage multi-modality
images. Empirically-designed fusion strategies were commonly adopted to fuse
multi-modality information via task-specific manually-designed networks, thus
limiting the adaptability to different scenarios. In this study, we propose an
Adaptive Multi-modality Segmentation-to-Survival model (AdaMSS) for survival
prediction from PET/CT images. Instead of adopting MTL, we propose a novel
Segmentation-to-Survival Learning (SSL) strategy, where our AdaMSS is trained
for tumor segmentation and survival prediction sequentially in two stages. This
strategy enables the AdaMSS to focus on tumor regions in the first stage and
gradually expand its focus to include other prognosis-related regions in the
second stage. We also propose a data-driven strategy to fuse multi-modality
information, which realizes adaptive optimization of fusion strategies based on
training data during training. With the SSL and data-driven fusion strategies,
our AdaMSS is designed as an adaptive model that can self-adapt its focus
regions and fusion strategy for different training stages. Extensive
experiments with two large clinical datasets show that our AdaMSS outperforms
state-of-the-art survival prediction methods. |
JulianA: An automatic treatment planning platform for intensity-modulated proton therapy | Creating high quality treatment plans is crucial for a successful
radiotherapy treatment. However, it demands substantial effort and special
training for dosimetrists. Existing automated treatment planning systems
typically require either an explicit prioritization of planning objectives,
human-assigned objective weights, large amounts of historic plans to train an
artificial intelligence or long planning times. Many of the existing
auto-planning tools are difficult to extend to new planning goals.
A new spot weight optimisation algorithm, called JulianA, was developed. The
algorithm minimises a scalar loss function built only from the prescribed dose
to the tumour and organs at risk (OARs), and does not rely on
historic plans. The objective weights in the loss function have default values
that do not need to be changed for the patients in our dataset. The system is a
versatile tool for researchers and clinicians without specialised programming
skills. Extending it is as easy as adding an additional term to the loss
function. JulianA was validated on a dataset of 19 patients with intra- and
extracerebral neoplasms within the cranial region that had been treated at our
institute. For each patient, a reference plan which was delivered to the cancer
patient, was exported from our treatment database. Then JulianA created the
auto plan using the same beam arrangement. The reference and auto plans were
given to a blinded independent reviewer who assessed the acceptability of each
plan, ranked the plans and assigned the human-/machine-made labels.
The auto plans were considered acceptable in 16 out of 19 patients and at
least as good as the reference plan for 11 patients. Whether a plan was crafted
by a dosimetrist or by JulianA was correctly recognised in only 9 cases. The median time
for the spot weight optimisation is approx. 2 min (range: 0.5 min - 7 min). |
At-Admission Prediction of Mortality and Pulmonary Embolism in COVID-19 Patients Using Statistical and Machine Learning Methods: An International Cohort Study | By September 2022, more than 600 million cases of SARS-CoV-2 infection had
been reported globally, resulting in over 6.5 million deaths. COVID-19
mortality risk estimators are often, however, developed with small
unrepresentative samples and with methodological limitations. It is highly
important to develop predictive tools for pulmonary embolism (PE) in COVID-19
patients, as PE is one of the most severe preventable complications of COVID-19. Using
a dataset of more than 800,000 COVID-19 patients from an international cohort,
we propose a cost-sensitive gradient-boosted machine learning model that
predicts occurrence of PE and death at admission. Logistic regression, Cox
proportional hazards models, and Shapley values were used to identify key
predictors for PE and death. Our prediction model had a test AUROC of 75.9% and
74.2%, and sensitivities of 67.5% and 72.7% for PE and all-cause mortality
respectively on a highly diverse and held-out test set. The PE prediction model
was also evaluated separately on patients in the UK and Spain, with test results of
74.5% AUROC with 63.5% sensitivity, and 78.9% AUROC with 95.7% sensitivity,
respectively. Age, sex,
region of admission, comorbidities (chronic cardiac and pulmonary disease,
dementia, diabetes, hypertension, cancer, obesity, smoking), and symptoms (any,
confusion, chest pain, fatigue, headache, fever, muscle or joint pain,
shortness of breath) were the most important clinical predictors at admission.
Our machine learning model developed from an international cohort can serve to
better regulate hospital risk prioritisation of at-risk patients. |
A Scintillator Beam Monitor for Real-Time FLASH Radiotherapy | FLASH Radiotherapy (RT) is a potentially new cancer radiotherapy technique
where an entire therapeutic dose is delivered in about 0.1 s and at ~1000 times
higher dose rate than in conventional RT. For clinical trials to be conducted
safely, precise and fast beam monitoring that can generate an out-of-tolerance
beam interrupt is required. A FLASH Beam Scintillator Monitor (FBSM) is being
developed based in part on two novel proprietary scintillator materials: an
organic polymeric material (PM) and inorganic hybrid (HM). The FBSM provides
large area coverage, low mass profile, linear response over a broad dynamic
range, radiation tolerance, real-time analysis, and an IEC-compliant fast
beam-interrupt signal. This paper includes the design concept and test results
from prototype devices in radiation beams that include heavy ions, low energy
protons at nA currents, FLASH level dose per pulse electron beams, and in a
hospital radiotherapy clinic with electron beams. Results include image
quality, response linearity, radiation hardness, spatial resolution, and
real-time data processing. The PM and HM scintillators exhibited no measurable drop
in signal after cumulative doses of 9 kGy and 20 kGy, respectively. HM showed a
small -0.02%/kGy signal decrease after a 212 kGy cumulative dose resulting from
continuous exposure for 15 minutes at a high FLASH dose rate of 234 Gy/s. These
tests established the linear response of the FBSM with respect to beam
currents, dose per pulse, and material thickness. Comparison with commercial
Gafchromic film indicates that the FBSM produces a high resolution 2D beam
image and can reproduce a nearly identical beam profile, including primary beam
tails. At 20 kfps or 50 microsec/frame, the real-time FPGA based computation
and analysis of beam position, beam shape, and beam dose takes < 1 microsec. |
AlGaN/AlN Stranski-Krastanov quantum dots for highly efficient electron beam pumped emitters: The role of miniaturization and composition to attain far UV-C emission | Conventional ultraviolet (UV) lamps for disinfection emit radiation in the
255-270 nm range, which poses a high risk of causing cancer and cataracts. To
address these concerns, solid-state far UV-C sources emitting below 240 nm are
gaining attention as a safe and sustainable disinfection solution for occupied
spaces. Here, we delve into the extension of the AlxGa1-xN/AlN quantum dot (QD)
technology towards the far UV-C range, which presents various challenges
associated with the reduction of the lattice mismatch and band offset when Al
is incorporated in the QDs. We explore the structural and optical impact of
increasing the Al content through the increase of the Al flux and eventual
correction of the Ga flux to maintain a constant metal/N ratio. We also examine
the impact of extreme miniaturization of the QDs, achieved through a reduction
of their growth time, on the spectral behavior and internal quantum efficiency
(IQE). The high Al content results in QDs with a reduced aspect ratio
(height/diameter) and thicker wetting layer when compared to the GaN/AlN
system. Self-assembled QDs grown with a metal/N ratio ranging from 0.5 to 0.8
show an IQE around 50%, independent of the Al content (up to 65%) or emission
wavelength (300-230 nm). However, samples emitting at wavelengths below 270 nm
exhibit a bimodal luminescence associated with inhomogeneous in-plane emission
attributed to fluctuations of the QD shape associated with extended defects.
Reducing the QD size exacerbates the bimodality without reducing the emission
wavelength. The power efficiencies under electron beam pumping range from 0.4%
to 1%, with clear potential for improvement through surface treatments that
enhance light extraction efficiency. |
Joint regional uptake quantification of Thorium-227 and Radium-223 using a multiple-energy-window projection-domain quantitative SPECT method | Thorium-227-based alpha-particle radiopharmaceutical therapies (alpha-RPTs)
are currently being investigated in several clinical and pre-clinical studies.
After administration, Thorium-227 decays to Radium-223, another
alpha-particle-emitting isotope, which redistributes within the patient.
Reliable dose quantification of both Thorium-227 and Radium-223 is clinically
important, and SPECT can perform this quantification as these isotopes also
emit gamma-ray photons. However, reliable quantification is challenging for
several reasons: the orders-of-magnitude lower activity compared to
conventional SPECT, resulting in a very low number of detected counts, and the
presence of multiple photopeaks with substantial overlap in the emission spectra
of these isotopes. To address these issues, we propose a multiple-energy-window
projection-domain quantification (MEW-PDQ) method that jointly estimates the
regional activity uptake of both Thorium-227 and Radium-223 directly using the
SPECT projection data from multiple energy windows. We evaluated the method
with realistic simulation studies conducted with anthropomorphic digital
phantoms, including a virtual imaging trial in the context of imaging patients
with bone metastases of prostate cancer who were treated with Thorium-227-based
alpha-RPTs. The proposed method yielded reliable regional uptake estimates of
both isotopes and outperformed state-of-the-art methods across different lesion
sizes, contrasts, and varying levels of intra-lesion heterogeneity. This
superior performance was also observed in the virtual imaging trial.
Additionally, the variance of the estimated uptake approached the theoretical
limit defined by the Cram\'er-Rao lower bound. These results provide strong evidence in
support of this method for reliable uptake quantification in Thorium-227-based
alpha-RPTs. |
An Investigation into the Effects of Pre-training Data Distributions for Pathology Report Classification | Pre-trained transformer models have demonstrated success across many natural
language processing (NLP) tasks. In applying these models to the clinical
domain, a prevailing assumption is that pre-training language models from
scratch on large-scale biomedical data results in substantial improvements. We
test this assumption with 4 pathology classification tasks on a corpus of 2907
prostate cancer pathology reports. We evaluate 5 transformer pre-trained models
that are the same size but differ in pre-training corpora. Specifically, we
analyze 3 categories of models: 1) General-domain: BERT and Turing Natural
Language Representation (TNLR) models, which use general corpora for
pre-training; 2) Mixed-domain: BioBERT, which is obtained from BERT by including
PubMed abstracts in pre-training, and Clinical BioBERT, which additionally
includes MIMIC-III clinical notes; and 3) Domain-specific: PubMedBERT, which is
pre-trained from scratch on PubMed abstracts. We find the mixed-domain and
domain-specific models exhibit faster feature disambiguation during
fine-tuning. However, the domain-specific model, PubMedBERT, can overfit to
minority classes when presented with class imbalance, a common scenario in
pathology report data. At the same time, the mixed-domain models are more
resistant to overfitting. Our findings indicate that the use of general natural
language and domain-specific corpora in pre-training serve complementary
purposes for pathology report classification. The first enables resistance to
overfitting when fine-tuning on an imbalanced dataset while the second allows
for more accurate modelling of the fine-tuning domain. An expert evaluation is
also conducted to reveal common outlier modes of each model. Our results could
inform better fine-tuning practices in the clinical domain, to possibly
leverage the benefits of mixed-domain models for imbalanced downstream
datasets. |
A Comparison of Mutation and Amplification-Driven Resistance Mechanisms and Their Impacts on Tumor Recurrence | Tumor recurrence, driven by the evolution of drug resistance, is a major
barrier to therapeutic success in cancer. Resistance is often caused by genetic
alterations such as point mutation, which refers to the modification of a
single genomic base pair, or gene amplification, which refers to the
duplication of a region of DNA that contains a gene. Here we investigate the
dependence of tumor recurrence dynamics on these mechanisms of resistance,
using stochastic multi-type branching process models. We derive tumor
extinction probabilities and deterministic estimates for the tumor recurrence
time, defined as the time when an initially drug sensitive tumor surpasses its
original size after developing resistance. For models of amplification-driven
and mutation-driven resistance, we prove law of large numbers results regarding
the convergence of the stochastic recurrence times to their mean. Additionally,
we prove sufficient and necessary conditions for a tumor to escape extinction
under the gene amplification model, discuss behavior under biologically
relevant parameters, and compare the recurrence time and tumor composition in
the mutation and amplification models both analytically and using simulations.
In comparing these mechanisms, we find that the ratio between recurrence times
driven by amplification vs. mutation depends linearly on the number of
amplification events required to acquire the same degree of resistance as a
mutation event, and we find that the relative frequency of amplification and
mutation events plays a key role in determining the mechanism under which
recurrence is more rapid. In the amplification-driven resistance model, we also
observe that increasing drug concentration leads to a stronger initial
reduction in tumor burden, but that the eventual recurrent tumor population is
less heterogeneous, more aggressive, and harbors higher levels of
drug-resistance. |
Lensless polarimetric coded ptychography (pol-CP) for high-resolution, high-throughput birefringence imaging on a chip | Polarimetric imaging provides valuable insights into the polarization state
of light interacting with a sample. It can infer crucial birefringence
properties of bio-specimens without using any labels, thereby facilitating the
diagnosis of diseases such as cancer and osteoarthritis. In this study, we
introduce a novel polarimetric coded ptychography (pol-CP) approach that
enables high-resolution, high-throughput birefringence imaging on a chip. Our
platform deviates from traditional lens-based polarization systems by employing
an integrated polarimetric coded sensor for lensless diffraction data
acquisition. Utilizing Jones calculus, we quantitatively determine the
birefringence retardance and orientation information of bio-specimens from four
recovered intensity images. Our portable pol-CP prototype can resolve the
435-nm linewidth on the resolution target and the imaging field of view for a
single acquisition is limited only by the detector size of 41 mm^2. The
prototype allows for the acquisition of gigapixel birefringence images with a
180-mm^2 field of view in ~3.5 minutes, achieving an imaging throughput
comparable to that of a conventional whole slide scanner. To demonstrate its
biomedical applications, we perform high-throughput imaging of malaria-infected
blood smears, locating parasites using birefringence contrast. We also generate
birefringence maps of label-free thyroid smears to identify thyroid follicles.
Notably, the recovered birefringence maps emphasize the same regions as
autofluorescence images, indicating the potential for rapid on-site evaluation
of label-free biopsies. The reported approach offers a portable, turnkey
solution for high-resolution, high-throughput polarimetric analysis without
using lenses, with potential applications in disease diagnosis, sample
screening, and label-free chemical imaging. |
3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation | Although the segment anything model (SAM) has achieved impressive results on
general-purpose semantic segmentation with strong generalization ability on
daily images, its demonstrated performance on medical image segmentation is
less precise and not stable, especially when dealing with tumor segmentation
tasks that involve objects of small sizes, irregular shapes, and low contrast.
Notably, the original SAM architecture is designed for 2D natural images and
therefore cannot effectively extract the 3D spatial information from
volumetric medical data. In this paper, we propose a novel
adaptation method for transferring SAM from 2D to 3D for promptable medical
image segmentation. Through a holistically designed scheme for architecture
modification, we transfer the SAM to support volumetric inputs while retaining
the majority of its pre-trained parameters for reuse. The fine-tuning process
is conducted in a parameter-efficient manner, wherein most of the pre-trained
parameters remain frozen, and only a few lightweight spatial adapters are
introduced and tuned. Despite the domain gap between natural and medical
data and the disparity in the spatial arrangement between 2D and 3D, the
transformer trained on natural images can effectively capture the spatial
patterns present in volumetric medical images with only lightweight
adaptations. We conduct experiments on four open-source tumor segmentation
datasets, and with a single click prompt, our model can outperform domain
state-of-the-art medical image segmentation models on 3 out of 4 tasks,
specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and
colon cancer segmentation, respectively, and achieve similar performance for liver tumor
segmentation. We also compare our adaptation method with existing popular
adapters, and observe significant performance improvements on most datasets. |
A two-sample comparison of mean survival times of uncured sub-populations | Comparing the survival times between two groups is a common problem in
time-to-event analysis, for example if one would like to understand whether one
medical treatment is superior to another. In the standard survival analysis
setting, there has been a lot of discussion on how to quantify such difference
and what can be an intuitive, easily interpretable, summary measure. In the
presence of subjects that are immune to the event of interest (`cured'), we
illustrate that it is not appropriate to just compare the overall survival
functions. Instead, it is more informative to compare the cure fractions and
the survival of the uncured sub-populations separately from each other. Our
research is mainly driven by the question: if the cure fraction is similar for
two available treatments, how else can we determine which is preferable? To
this end, we estimate the mean survival times in the uncured fractions of both
treatment groups ($MST_u$) and develop permutation tests for inference. In the
first out of two connected papers, we focus on nonparametric approaches. The
methods are illustrated with medical data of leukemia patients. In Part II we
adjust the mean survival time of the uncured for potential confounders, which
is crucial in observational settings. For each group, we employ the widely used
logistic-Cox mixture cure model and estimate the $MST_u$ conditionally on a
given covariate value. An asymptotic and a permutation-based approach have been
developed for making inference on the difference of conditional $MST_u$'s
between two groups. Contrarily to available results in the literature, in the
simulation study we do not observe a clear advantage of the permutation method
over the asymptotic one to justify its increased computational cost. The
methods are illustrated through a practical application to breast cancer data. |
Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers | Acute Lymphoblastic Leukemia (ALL) is one of the most common types of
childhood blood cancer. The quick start of the treatment process is critical to
saving the patient's life, and for this reason, early diagnosis of this disease
is essential. Examining the blood smear images of these patients is one of the
methods used by expert doctors to diagnose this disease. Deep learning-based
methods, which have advanced significantly in recent years, have numerous
applications in medical fields. ALL diagnosis is no exception in
this field, and several machine learning-based methods for this problem have
been proposed. In previous methods, high diagnostic accuracy was reported, but
our work showed that this alone is not sufficient, as it can lead to models
taking shortcuts and not making meaningful decisions. This issue arises due to
the small size of medical training datasets. To address this, we constrained
our model to follow a pipeline inspired by experts' work. We also demonstrated
that, since a judgement based on only one image is insufficient, redefining the
problem as a multiple-instance learning problem is necessary for achieving a
practical result. Our model is the first to provide a solution to this problem
in a multiple-instance learning setup. We introduced a novel pipeline for
diagnosing ALL that approximates the process used by hematologists, is
sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an
F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL
IDB 1. Our method was further evaluated on an out-of-distribution dataset,
which posed a challenging test and had acceptable performance. Notably, our
model was trained on a relatively small dataset, highlighting the potential for
our approach to be applied to other medical datasets with limited data
availability. |
Adaptive Region Selection for Active Learning in Whole Slide Image Semantic Segmentation | The process of annotating histological gigapixel-sized whole slide images
(WSIs) at the pixel level for the purpose of training a supervised segmentation
model is time-consuming. Region-based active learning (AL) involves training
the model on a limited number of annotated image regions instead of requesting
annotations of the entire images. These annotation regions are iteratively
selected, with the goal of optimizing model performance while minimizing the
annotated area. The standard method for region selection evaluates the
informativeness of all square regions of a specified size and then selects a
specific quantity of the most informative regions. We find that the efficiency
of this method highly depends on the choice of AL step size (i.e., the
combination of region size and the number of selected regions per WSI), and a
suboptimal AL step size can result in redundant annotation requests or inflated
computation costs. This paper introduces a novel technique for selecting
annotation regions adaptively, mitigating the reliance on this AL
hyperparameter. Specifically, we dynamically determine each region by first
identifying an informative area and then detecting its optimal bounding box, as
opposed to selecting regions of a uniform predefined shape and size as in the
standard method. We evaluate our method using the task of breast cancer
metastases segmentation on the public CAMELYON16 dataset and show that it
consistently achieves higher sampling efficiency than the standard method
across various AL step sizes. With only 2.6% of tissue area annotated, we
achieve full annotation performance and thereby substantially reduce the costs
of annotating a WSI dataset. The source code is available at
https://github.com/DeepMicroscopy/AdaptiveRegionSelection. |
Measurement of the Neutron Radius of 208Pb Through Parity-Violation in Electron Scattering | We report the first measurement of the parity-violating asymmetry A_PV in the
elastic scattering of polarized electrons from 208Pb. A_PV is sensitive to the
radius of the neutron distribution (Rn). The result A_PV = 0.656 \pm 0.060
(stat) \pm 0.014 (syst) ppm corresponds to a difference between the radii of
the neutron and proton distributions Rn - Rp = 0.33 +0.16 -0.18 fm and provides
the first electroweak observation of the neutron skin which is expected in a
heavy, neutron-rich nucleus. |
DHCAL with Minimal Absorber: Measurements with Positrons | In special tests, the active layers of the CALICE Digital Hadron Calorimeter
prototype, the DHCAL, were exposed to low energy particle beams, without
absorber plates interleaved between them. The thickness of each layer corresponded
approximately to 0.29 radiation lengths or 0.034 nuclear interaction lengths,
defined mostly by the copper and steel skins of the detector cassettes. This
paper reports on measurements performed with this device in the Fermilab test
beam with positrons in the energy range of 1 to 10 GeV. The measurements are
compared to simulations based on GEANT4 and a standalone program to emulate the
detailed response of the active elements. |
Weakly Bound Neutron-Rich Nuclei and Cosmic Phenomena | The single particle and bulk properties of the neutron-rich nuclei constrain
fundamental issues in nuclear physics and nuclear astrophysics like the limits
of existence of quantum many body systems (atomic nuclei), the equation of
state of neutron-rich matter, neutron star, nucleosynthesis, evolution of
stars, neutron star mergers, etc. State-of-the-art Coulomb breakup measurements
of neutron-rich nuclei have been used to explore those properties. Unambiguous
information on detailed components of the ground-state wave-function, along with
the quantum numbers of the valence neutron, has been obtained from
the measurement of the threshold strength along with the $\gamma$-ray spectra of
the core following Coulomb breakup. The shape of this threshold strength is a
finger-print of the quantum numbers of the nucleon. We investigated the
ground-state properties of the neutron-rich Na, Mg, Al nuclei around N $\sim$
20 using this method at GSI, Darmstadt. Very clear evidence has been observed
for the melting and merging of long-cherished magic shell gaps at N = 20, 28. The
evanescent neutron-rich nuclei imprint their existence in stellar explosive
scenarios (r-process etc.). Coulomb dissociation (CD) is one of the important
indirect measurements of the capture cross-section which may provide valuable
input to the model for star evolution process, particularly the r-process.
Some valuable bulk properties of the neutron-rich nuclei like the density
dependent symmetry energy, neutron skin, etc., play a key role in understanding
cosmic phenomena and these properties have been studied via electromagnetic
excitation. Preliminary results of electromagnetic excitation of the
neutron-rich nucleus, $^{32}$Mg are presented. |
Properties of slowly rotating asteroids from the Convex Inversion Thermophysical Model | Results from the TESS mission showed that previous studies strongly
underestimated the number of slow rotators, revealing the importance of
studying those asteroids. For most slowly rotating asteroids (P > 12 h), no spin
and shape model is available because of observation selection effects. This
hampers determination of their thermal parameters and accurate sizes.
We continue our campaign of minimising selection effects among main belt
asteroids. Our targets are slow rotators with low light-curve amplitudes. The
goal is to provide their scaled spin and shape models together with thermal
inertia, albedo, and surface roughness to complete the statistics. Rich
multi-apparition datasets of dense light curves are supplemented with data from
Kepler and TESS. In addition to data in the visible range, we also use thermal
data from infrared space observatories (IRAS, Akari and WISE) in a combined
optimisation process using the Convex Inversion Thermophysical Model (CITPM).
This novel method has so far been applied to only a few targets, and in this
work we further validate the method.
We present the models of 16 slow rotators. All provide good fits to both
thermal and visible data. The obtained sizes are on average accurate to within
5%, with diameters in the range from 25 to 145 km. The rotation periods
of our targets range from 11 to 59 hours, and the thermal inertia covers a wide
range of values, from 2 to <400 SI units, not showing any correlation with the
period. With this work we increase the sample of slow rotators with reliable
spin and shape models and known thermal inertia by 40%. The thermal inertia
values of our sample do not display a previously suggested increasing trend
with rotation period, which might be due to their small skin depth. |
Precision Determination of the Neutral Weak Form Factor of $^{48}$Ca | We report a precise measurement of the parity-violating asymmetry $A_{\rm
PV}$ in the elastic scattering of longitudinally polarized electrons from
$^{48}{\rm Ca}$. We measure $A_{\rm PV} =2668\pm 106\ {\rm (stat)}\pm 40\ {\rm
(syst)}$ parts per billion, leading to an extraction of the neutral weak form
factor $F_{\rm W} (q=0.8733$ fm$^{-1}) = 0.1304 \pm 0.0052 \ {\rm (stat)}\pm
0.0020\ {\rm (syst)}$ and the charge minus the weak form factor $F_{\rm ch} -
F_{\rm W} = 0.0277\pm 0.0055$. The resulting neutron skin thickness
$R_n-R_p=0.121 \pm 0.026\ {\rm (exp)} \pm 0.024\ {\rm (model)}$~fm is
relatively thin yet consistent with many model calculations. The combined CREX
and PREX results will have implications for future energy density functional
calculations and on the density dependence of the symmetry energy of nuclear
matter. |
The Laser-hybrid Accelerator for Radiobiological Applications | The `Laser-hybrid Accelerator for Radiobiological Applications', LhARA, is
conceived as a novel, uniquely-flexible facility dedicated to the study of
radiobiology. The technologies demonstrated in LhARA, which have wide
application, will be developed to allow particle-beam therapy to be delivered
in a completely new regime, combining a variety of ion species in a single
treatment fraction and exploiting ultra-high dose rates. LhARA will be a hybrid
accelerator system in which laser interactions drive the creation of a large
flux of protons or light ions that are captured using a plasma (Gabor) lens and
formed into a beam. The laser-driven source allows protons and ions to be
captured at energies significantly above those that pertain in conventional
facilities, thus evading the current space-charge limit on the instantaneous
dose rate that can be delivered. The laser-hybrid approach, therefore, will
allow the vast ``terra incognita'' of the radiobiology that determines the
response of tissue to ionising radiation to be studied with protons and light
ions using a wide variety of time structures, spectral distributions, and
spatial configurations at instantaneous dose rates up to and significantly
beyond the ultra-high dose-rate `FLASH' regime.
It is proposed that LhARA be developed in two stages. In the first stage, a
programme of in vitro radiobiology will be served with proton beams with
energies between 10 MeV and 15 MeV. In stage two, the beam will be accelerated
using a fixed-field accelerator (FFA). This will allow experiments to be
carried out in vitro and in vivo with proton beam energies of up to 127 MeV. In
addition, ion beams with energies up to 33.4 MeV per nucleon will be available
for in vitro and in vivo experiments. This paper presents the conceptual design
for LhARA and the R&D programme by which the LhARA consortium seeks to
establish the facility. |
Nine Recommendations for Decision Aid Implementation from the Clinician Perspective | Background: Shared decision-making (SDM) aims to empower patients to take an
active role in their treatment choices, supported by clinicians and patient
decision aids (PDAs). The purpose of this study is to explore barriers and
possible facilitators to SDM and a PDA in the prostate cancer trajectory. In
the process we identify possible actions that organizations and individuals can
take to support implementation in practice.
Methods: We use the Ottawa Model of Research Use as a framework to determine
the barriers and facilitators to SDM and PDAs from the perspective of
clinicians. Semi-structured interviews were conducted with urologists (n=4),
radiation oncologists (n=3), and oncology nurses (n=2), focusing on the current
decision-making process experienced by these stakeholders. Questions included
their attitudes towards SDM and PDAs, barriers to implementation and possible
strategies to overcome them.
Results: Time pressure and patient characteristics were cited as major
barriers by 55% of the clinicians we interviewed. Structural factors such as
external quotas for certain treatment procedures were also considered as
barriers by 44% of the clinicians. Facilitating factors involved organizational
changes to embed PDAs in the treatment trajectory, training in using PDAs as a
tool for SDM, and clinician motivation by disseminating positive clinical
outcomes. Our findings also suggest a role for external stakeholders such as
healthcare insurers in creating economic incentives to facilitate
implementation.
Conclusion: Our findings highlight the importance of a multi-faceted
implementation strategy to support SDM. While clinician motivation and patient
activation are essential, structural/economic barriers may hamper
implementation. Action must also be taken at the administrative and policy
levels to foster a collaborative environment for SDM and, in the process, for
PDAs. |
Secondary radiation measurements for particle therapy applications: prompt photons produced by $^{4}$He, $^{12}$C and $^{16}$O ion beams in a PMMA target | Charged particle beams are used in Particle Therapy (PT) to treat oncological
patients due to their selective dose deposition in tissues and their higher
biological effectiveness in killing cancer cells compared with the photons and
electrons used in conventional radiotherapy. Nowadays, protons and carbon ions
are used in PT clinical routine but, recently, interest in the potential
application of helium and oxygen beams has grown due to their reduced multiple
scattering inside the body and increased linear energy transfer, relative
biological effectiveness and oxygen enhancement ratio. The precision of PT calls for
online dose monitoring techniques, crucial to improve the quality assurance of
treatments. The beam range confined in the irradiated target can be monitored
thanks to the neutral or charged secondary radiation emitted by the
interactions of hadron beams with matter. Prompt photons are produced by
nuclear de-excitation processes and, at present, different dose monitoring and
beam range verification techniques based on prompt $\gamma$ detection have
been proposed. It is hence important to measure the $\gamma$ yield
under therapy-like conditions. In this paper we report the
yields of prompt photons produced by the interaction of helium, carbon and
oxygen ion beams with a PMMA target. The measurements were performed at the
Heidelberg Ion-beam Therapy center (HIT) with beams of different energies. A
LYSO scintillator was used as the photon detector. The obtained $\gamma$
yields for $^{12}$C ion beams are compared with results from the literature,
while no results for $^{4}$He and $^{16}$O beams have been published yet. A
discussion on the expected resolution of a slit camera detector is presented,
demonstrating the feasibility of a prompt-$\gamma$ based monitoring technique
for PT treatments using helium, carbon and oxygen ion beams. |
Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy | Over half a million individuals are diagnosed with head and neck cancer each
year worldwide. Radiotherapy is an important curative treatment for this
disease, but it requires manual, time-consuming delineation of radio-sensitive
organs at risk (OARs). This planning process can delay treatment, while also
introducing inter-operator variability with resulting downstream radiation dose
differences. While auto-segmentation algorithms offer a potentially time-saving
solution, the challenges in defining, quantifying and achieving expert
performance remain. Adopting a deep learning approach, we demonstrate a 3D
U-Net architecture that achieves expert-level performance in delineating 21
distinct head and neck OARs commonly segmented in clinical practice. The model
was trained on a dataset of 663 deidentified computed tomography (CT) scans
acquired in routine clinical practice and with both segmentations taken from
clinical practice and segmentations created by experienced radiographers as
part of this research, all in accordance with consensus OAR definitions. We
demonstrate the model's clinical applicability by assessing its performance on
a test set of 21 CT scans from clinical practice, each with the 21 OARs
segmented by two independent experts. We also introduce surface Dice similarity
coefficient (surface DSC), a new metric for the comparison of organ
delineation, to quantify deviation between OAR surface contours rather than
volumes, better reflecting the clinical task of correcting errors in the
automated organ segmentations. The model's generalisability is then
demonstrated on two distinct open source datasets, reflecting centres and
countries different from those used in model training. With appropriate validation studies and
regulatory approvals, this system could improve the efficiency, consistency,
and safety of radiotherapy pathways. |
Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy | The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing
initiative to address eminent problems in developing reliable computer aided
detection and diagnosis endoscopy systems and suggest a pathway for clinical
translation of technologies. Whilst endoscopy is a widely used diagnostic and
treatment tool for hollow-organs, there are several core challenges often faced
by endoscopists, mainly: 1) presence of multi-class artefacts that hinder their
visual interpretation, and 2) difficulty in identifying subtle precancerous
precursors and cancer abnormalities. Artefacts often affect the robustness of
deep learning methods applied to the gastrointestinal tract organs as they can
be confused with tissue of interest. EndoCV2020 challenges are designed to
address research questions in these remits. In this paper, we present a summary
of methods developed by the top 17 teams and provide an objective comparison of
state-of-the-art methods and methods designed by the participants for two
sub-challenges: i) artefact detection and segmentation (EAD2020), and ii)
disease detection and segmentation (EDD2020). Multi-center, multi-organ,
multi-class, and multi-modal clinical endoscopy datasets were compiled for both
EAD2020 and EDD2020 sub-challenges. The out-of-sample generalization ability of
detection algorithms was also evaluated. Whilst most teams focused on accuracy
improvements, only a few methods hold credibility for clinical usability. The
best performing teams provided solutions to tackle class imbalance, and
variabilities in size, origin, modality and occurrences by exploring data
augmentation, data fusion, and optimal class thresholding techniques. |
Consistency checks of results from a Monte Carlo code intercomparison for emitted electron spectra and energy deposition around a single gold nanoparticle irradiated by X-rays | Organized by the European Radiation Dosimetry Group (EURADOS), a Monte Carlo
code intercomparison exercise was conducted where participants simulated the
emitted electron spectra and energy deposition around a single gold
nanoparticle (GNP) irradiated by X-rays. In the exercise, the participants
scored energy imparted in concentric spherical shells around a spherical volume
filled with gold or water as well as the spectral distribution of electrons
leaving the GNP. Initially, only the ratio of energy deposition with and
without GNP was to be reported. During the evaluation of the exercise, however,
the data for energy deposition in the presence and absence of the GNP were also
requested. GNP diameters of 50 nm and 100 nm were considered, as well as
two different X-ray spectra (50 kVp and 100 kVp). This introduced a redundancy
that can be used to cross-validate the internal consistency of the simulation
results. In this work, evaluation of the reported results is presented in terms
of integral quantities that can be benchmarked against values obtained from
physical properties of the radiation spectra and materials involved. The impact
of different interaction cross-section datasets and their implementation in the
different Monte Carlo codes is also discussed. |
Prediction of the Position of External Markers Using a Recurrent Neural Network Trained With Unbiased Online Recurrent Optimization for Safe Lung Cancer Radiotherapy | During lung radiotherapy, the position of infrared reflective objects on the
chest can be recorded to estimate the tumor location. However, radiotherapy
systems have a latency, inherent to robot control limitations, that impedes
the precision of radiation delivery. Prediction with online learning of recurrent
neural networks (RNN) allows for adaptation to non-stationary respiratory
signals, but classical methods such as RTRL and truncated BPTT are respectively
slow and biased. This study investigates the capabilities of unbiased online
recurrent optimization (UORO) to forecast respiratory motion and enhance safety
in lung radiotherapy.
We used 9 observation records of the 3D position of 3 external markers on the
chest and abdomen of healthy individuals breathing during intervals from 73s to
222s. The sampling frequency was 10Hz, and the amplitudes of the recorded
trajectories range from 6mm to 40mm in the superior-inferior direction. We
forecast the 3D location of each marker simultaneously with a horizon value
between 0.1s and 2.0s, using an RNN trained with UORO. We compare its
performance with an RNN trained with RTRL, LMS, and offline linear regression.
We provide closed-form expressions for quantities involved in the loss gradient
calculation in UORO, thereby making its implementation efficient. Training and
cross-validation were performed during the first minute of each sequence.
On average over the horizon values considered and the 9 sequences, UORO
achieves the lowest root-mean-square (RMS) error and maximum error among the
compared algorithms. These errors are respectively equal to 1.3mm and 8.8mm,
and the prediction time per time step was lower than 2.8ms (Dell, Intel Core
i9-9900K 3.60 GHz). Linear regression has the lowest RMS error for the horizon
values 0.1s and 0.2s, followed by LMS for horizon values between 0.3s and 0.5s,
and UORO for horizon values greater than 0.6s. |
Bayesian calibration of simulation models: A tutorial and an Australian smoking behaviour model | Simulation models of epidemiological, biological, ecological, and
environmental processes are increasingly being calibrated using Bayesian
statistics. The Bayesian approach provides simple rules to synthesise multiple
data sources and to calculate uncertainty in model output due to uncertainty in
the calibration data. As the number of published tutorials and studies grows,
the solutions to common difficulties in Bayesian calibration across these
fields have become more apparent, and a step-by-step process for successful
calibration across all these fields is emerging. We provide a statement of the
key steps in a Bayesian calibration, and we outline analyses and approaches to
each step that have emerged from one or more of these applied sciences. Thus we
present a synthesis of Bayesian calibration methodologies that cut across a
number of scientific disciplines.
To demonstrate these steps and to provide further detail on the computations
involved in Bayesian calibration, we calibrated a compartmental model of
tobacco smoking behaviour in Australia. We found that the proportion of a birth
cohort estimated to take up smoking before they reach age 20 years in 2016 was
at its lowest value since the early 20th century, and that quit rates were at
their highest. As a novel outcome, we quantified the rate that ex-smokers
switched to reporting as a 'never smoker' when surveyed later in life; a
phenomenon that, to our knowledge, has never been quantified using
cross-sectional survey data. |
OpenKBP-Opt: An international and reproducible evaluation of 76 knowledge-based planning pipelines | We establish an open framework for developing plan optimization models for
knowledge-based planning (KBP) in radiotherapy. Our framework includes
reference plans for 100 patients with head-and-neck cancer and high-quality
dose predictions from 19 KBP models that were developed by different research
groups during the OpenKBP Grand Challenge. The dose predictions were input to
four optimization models to form 76 unique KBP pipelines that generated 7600
plans. The predictions and plans were compared to the reference plans via: dose
score, which is the average mean absolute voxel-by-voxel difference in dose a
model achieved; the deviation in dose-volume histogram (DVH) criterion; and the
frequency of clinical planning criteria satisfaction. We also performed a
theoretical investigation to justify our dose mimicking models. The range in
rank order correlation of the dose score between predictions and their KBP
pipelines was 0.50 to 0.62, which indicates that the quality of the predictions
is generally positively correlated with the quality of the plans. Additionally,
compared to the input predictions, the KBP-generated plans performed
significantly better (P<0.05; one-sided Wilcoxon test) on 18 of 23 DVH
criteria. Similarly, each optimization model generated plans that satisfied a
higher percentage of criteria than the reference plans. Lastly, our theoretical
investigation demonstrated that the dose mimicking models generated plans that
are also optimal for a conventional planning model. This was the largest
international effort to date for evaluating the combination of KBP prediction
and optimization models. In the interest of reproducibility, our data and code
are freely available at https://github.com/ababier/open-kbp-opt. |
The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs) | Pediatric tumors of the central nervous system are the most common cause of
cancer-related death in children. The five-year survival rate for high-grade
gliomas in children is less than 20%. Due to their rarity, the diagnosis of
these entities is often delayed, their treatment is mainly based on historic
treatment concepts, and clinical trials require multi-institutional
collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a
landmark community benchmark event with a successful history of 12 years of
resource creation for the segmentation and analysis of adult glioma. Here we
present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which
represents the first BraTS challenge focused on pediatric brain tumors with
data acquired across multiple international consortia dedicated to pediatric
neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on
benchmarking the development of volumetric segmentation algorithms for
pediatric brain glioma through standardized quantitative performance evaluation
metrics utilized across the BraTS 2023 cluster of challenges. Models gaining
knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training
data will be evaluated on separate validation and unseen test mpMRI data of
high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023
challenge brings together clinicians and AI/imaging scientists to lead to
faster development of automated segmentation techniques that could benefit
clinical trials, and ultimately the care of children with brain tumors. |
The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa) | Gliomas are the most common type of primary brain tumors. Although gliomas
are relatively rare, they are among the deadliest types of cancer, with a
survival of less than 2 years after diagnosis. Gliomas are challenging to
diagnose, hard to treat and inherently resistant to conventional therapy. Years
of extensive research to improve diagnosis and treatment of gliomas have
decreased mortality rates across the Global North, while chances of survival
among individuals in low- and middle-income countries (LMICs) remain unchanged
and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term
survival with glioma is associated with the identification of appropriate
pathological features on brain MRI and confirmation by histopathology. Since
2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated
state-of-the-art machine learning methods to detect, characterize, and classify
gliomas. However, it is unclear if the state-of-the-art methods can be widely
implemented in SSA given the extensive use of lower-quality MRI technology,
which produces poor image contrast and resolution, and, more importantly, the
propensity for late presentation of disease at advanced stages, as well as the
unique characteristics of gliomas in SSA (i.e., suspected higher rates of
gliomatosis cerebri). Thus, the BraTS-Africa Challenge provides a unique
opportunity to include brain MRI glioma cases from SSA in global efforts
through the BraTS Challenge to develop and evaluate computer-aided-diagnostic
(CAD) methods for the detection and characterization of glioma in
resource-limited settings, where the potential for CAD tools to transform
healthcare is greatest. |
Construction d'une plate-forme intégrée pour la cartographie de l'exposition des populations aux substances chimiques de l'environnement | L'analyse du lien entre l'environnement et la santé est devenue une
préoccupation majeure de santé publique comme en témoigne l'émergence
des deux Plans nationaux santé environnement. Pour ce faire, les décideurs
sont confrontés au besoin de développement d'outils nécessaires à
l'identification des zones géographiques dans lesquelles une surexposition
potentielle à des substances toxiques est observée. L'objectif du projet
Système d'information géographique (SIG), facteurs de risques
environnementaux et décès par cancer (SIGFRIED 1) est de construire une
plate-forme de modélisation permettant d'évaluer, par une approche
spatiale, l'exposition de la population française aux substances chimiques
et d'en identifier ses déterminants. L'évaluation des expositions est
réalisée par le biais d'une modélisation multimédia probabiliste. Les
problèmes épistémologiques liés à l'absence de données sont
palliés par la mise en œuvre d'outils utilisant les techniques d'analyse
spatiale. Un exemple est fourni sur la région Nord-Pas-de-Calais et Picardie,
pour le cadmium, le nickel et le plomb. Le calcul de l'exposition est
réalisé sur une durée de 70 ans sur la base des données disponibles
autour de l'année 2004 sur une maille de 1 km de côté. Par exemple pour
le Nord-Pas-de-Calais, les indicateurs permettent de définir deux zones pour
le cadmium et trois zones pour le plomb. Celles-ci sont liées à
l'historique industriel de la région : le bassin minier, les activités
métallurgiques et l'agglomération lilloise. La contribution des
différentes voies d'exposition varie sensiblement d'un polluant à l'autre.
Les cartes d'exposition ainsi obtenues permettent d'identifier les zones
géographiques dans lesquelles conduire en priorité des études
environnementales de terrains. Le SIG construit constitue la base d'une
plate-forme où les données d'émission à la source, de mesures
environnementales, d'exposition, puis sanitaires et socio-économiques
pourront être associées.
--
Analysis of the association between the environment and health has become a
major public health concern, as shown by the development of two national
environmental health plans. For such an analysis, policy-makers need tools to
identify the geographic areas where overexposure to toxic agents may be
observed. The objective of the SIGFRIED 1 project is to build a work station
for spatial modeling of the exposure of the French population to chemical
substances and for identifying the determinants of this exposure. Probabilistic
multimedia modeling is used to assess exposure. The epistemological problems
associated with the absence of data are overcome by the implementation of tools
that apply spatial analysis techniques. An example is furnished for the region
of Nord-Pas-de-Calais and Picardie, for cadmium, nickel and lead exposure. The
calculation of exposure is performed for a duration of 70 years on the basis of
data collected around 2004 for a grid of squares 1 km on a side. For example,
for Nord-Pas-de-Calais, the indicators allow us to define two areas for cadmium
and three for lead. They are linked to the region's industrial history: mining
basin, metallurgy activities, and the Lille metropolitan area. The contribution
of various exposure pathways varied substantially from one pollutant to
another. The exposure maps thus obtained allow us to identify the geographic
areas where environmental studies must be conducted as a priority. The GIS thus
constructed is the foundation of a workstation where source emission data,
environmental exposure measurements, and finally health and socioeconomic
measurements can be combined. |
The LUX-ZEPLIN (LZ) Experiment | We describe the design and assembly of the LUX-ZEPLIN experiment, a direct
detection search for cosmic WIMP dark matter particles. The centerpiece of the
experiment is a large liquid xenon time projection chamber sensitive to low
energy nuclear recoils. Rejection of backgrounds is enhanced by a Xe skin veto
detector and by a liquid scintillator Outer Detector loaded with gadolinium for
efficient neutron capture and tagging. LZ is located in the Davis Cavern at the
4850' level of the Sanford Underground Research Facility in Lead, South Dakota,
USA. We describe the major subsystems of the experiment and its key design
features and requirements. |
Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species | Background - The process of generating raw genome sequence data continues to
become cheaper, faster, and more accurate. However, assembly of such data into
high-quality, finished genome sequences remains challenging. Many genome
assembly tools are available, but they differ greatly in terms of their
performance (speed, scalability, hardware requirements, acceptance of newer
read technologies) and in their final output (composition of assembled
sequence). More importantly, it remains largely unclear how to best assess the
quality of assembled genome sequences. The Assemblathon competitions are
intended to assess current state-of-the-art methods in genome assembly. Results
- In Assemblathon 2, we provided a variety of sequence data to be assembled for
three vertebrate species (a bird, a fish, and a snake). This resulted in a total
of 43 submitted assemblies from 21 participating teams. We evaluated these
assemblies using a combination of optical map data, Fosmid sequences, and
several statistical methods. From over 100 different metrics, we chose ten key
measures by which to assess the overall quality of the assemblies. Conclusions
- Many current genome assemblers produced useful assemblies, containing a
significant representation of their genes, regulatory sequences, and overall
genome structure. However, the high degree of variability between the entries
suggests that there is still much room for improvement in the field of genome
assembly and that approaches which work well in assembling the genome of one
species may not necessarily work well for another. |
Beyond Low Earth Orbit: Biomonitoring, Artificial Intelligence, and Precision Space Health | Human space exploration beyond low Earth orbit will involve missions of
significant distance and duration. To effectively mitigate myriad space health
hazards, paradigm shifts in data and space health systems are necessary to
enable Earth-independence, rather than Earth-reliance. Promising developments
in the fields of artificial intelligence and machine learning for biology and
health can address these needs. We propose an appropriately autonomous and
intelligent Precision Space Health system that will monitor, aggregate, and
assess biomedical statuses; analyze and predict personalized adverse health
outcomes; adapt and respond to newly accumulated data; and provide preventive,
actionable, and timely insights to individual deep space crew members and
iterative decision support to their crew medical officer. Here we present a
summary of recommendations from a workshop organized by the National
Aeronautics and Space Administration, on future applications of artificial
intelligence in space biology and health. In the next decade, biomonitoring
technology, biomarker science, spacecraft hardware, intelligent software, and
streamlined data management must mature and be woven together into a Precision
Space Health system to enable humanity to thrive in deep space. |
Beyond Low Earth Orbit: Biological Research, Artificial Intelligence, and Self-Driving Labs | Space biology research aims to understand fundamental effects of spaceflight
on organisms, develop foundational knowledge to support deep space exploration,
and ultimately bioengineer spacecraft and habitats to stabilize the ecosystem
of plants, crops, microbes, animals, and humans for sustained multi-planetary
life. To advance these aims, the field leverages experiments, platforms, data,
and model organisms from both spaceborne and ground-analog studies. As research
is extended beyond low Earth orbit, experiments and platforms must be maximally
autonomous, light, agile, and intelligent to expedite knowledge discovery. Here
we present a summary of recommendations from a workshop organized by the
National Aeronautics and Space Administration on artificial intelligence,
machine learning, and modeling applications which offer key solutions toward
these space biology challenges. In the next decade, the synthesis of artificial
intelligence into the field of space biology will deepen the biological
understanding of spaceflight effects, facilitate predictive modeling and
analytics, support maximally autonomous and reproducible experiments, and
efficiently manage spaceborne data and metadata, all with the goal to enable
life to thrive in deep space. |
Federated Learning Enables Big Data for Rare Cancer Boundary Detection | Although machine learning (ML) has shown promise in numerous domains, there
are concerns about generalizability to out-of-sample data. This is currently
addressed by centrally sharing ample, and importantly diverse, data from
multiple sites. However, such centralization is challenging to scale (or even
infeasible) due to various limitations. Federated ML (FL) provides an
alternative to train accurate and generalizable ML models, by only sharing
numerical model updates. Here we present findings from the largest FL study
to-date, involving data from 71 healthcare institutions across 6 continents, to
generate an automatic tumor boundary detector for the rare disease of
glioblastoma, utilizing the largest dataset of such patients ever used in the
literature (25,256 MRI scans from 6,314 patients). We demonstrate a 33%
improvement over a publicly trained model to delineate the surgically
targetable tumor, and 23% improvement over the tumor's entire extent. We
anticipate our study to: 1) enable more studies in healthcare informed by large
and diverse data, ensuring meaningful results for rare diseases and
underrepresented populations, 2) facilitate further quantitative analyses for
glioblastoma via performance optimization of our consensus model for eventual
public release, and 3) demonstrate the effectiveness of FL at such scale and
task complexity as a paradigm shift for multi-site collaborations, alleviating
the need for data sharing. |
Segment Anything Model (SAM) Meets Glass: Mirror and Transparent Objects Cannot Be Easily Detected | Meta AI Research has recently released SAM (Segment Anything Model), which is
trained on a large segmentation dataset of over 1 billion masks. As a
foundation model in the field of computer vision, SAM
has gained attention for its impressive performance in generic object
segmentation. Despite its strong capability in a wide range of zero-shot
transfer tasks, it remains unknown whether SAM can detect things in challenging
setups like transparent objects. In this work, we perform an empirical
evaluation of two glass-related challenging scenarios: mirror and transparent
objects. We found that SAM often fails to detect the glass in both scenarios,
which raises concerns about deploying SAM in safety-critical situations that
have various forms of glass. |
The Change You Want to See | We live in a dynamic world where things change all the time. Given two images
of the same scene, being able to automatically detect the changes in them has
practical applications in a variety of domains. In this paper, we tackle the
change detection problem with the goal of detecting "object-level" changes in
an image pair despite differences in their viewpoint and illumination. To this
end, we make the following four contributions: (i) we propose a scalable
methodology for obtaining a large-scale change detection training dataset by
leveraging existing object segmentation benchmarks; (ii) we introduce a
co-attention based novel architecture that is able to implicitly determine
correspondences between an image pair and find changes in the form of bounding
box predictions; (iii) we contribute four evaluation datasets that cover a
variety of domains and transformations, including synthetic image changes, real
surveillance images of a 3D scene, and synthetic 3D scenes with camera motion;
(iv) we evaluate our model on these four datasets and demonstrate zero-shot and
beyond training transformation generalization. |
Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 Versus GPT-3.5 in a Breast Imaging Pilot. | Despite rising popularity and performance, studies evaluating the use of large language models for clinical decision support are lacking. Here, we evaluate the capacity of ChatGPT (Generative Pre-trained Transformer) versions GPT-3.5 and GPT-4 (OpenAI, San Francisco, California) for clinical decision support in radiology via the identification of appropriate imaging services for two important clinical presentations: breast cancer screening and breast pain.
We compared ChatGPT's responses to the ACR Appropriateness Criteria for breast pain and breast cancer screening. Our prompt formats included an open-ended (OE) and a select all that apply (SATA) format. Scoring criteria evaluated whether proposed imaging modalities were in accordance with ACR guidelines. Three replicate entries were conducted for each prompt, and the average of these was used to determine final scores.
Both ChatGPT-3.5 and ChatGPT-4 achieved an average OE score of 1.830 (out of 2) for breast cancer screening prompts. ChatGPT-3.5 achieved a SATA average percentage correct of 88.9%, compared with ChatGPT-4's average percentage correct of 98.4% for breast cancer screening prompts. For breast pain, ChatGPT-3.5 achieved an average OE score of 1.125 (out of 2) and a SATA average percentage correct of 58.3%, as compared with an average OE score of 1.666 (out of 2) and a SATA average percentage correct of 77.7% for ChatGPT-4.
Our results demonstrate the eventual feasibility of using large language models like ChatGPT for radiologic decision making, with the potential to improve clinical workflow and responsible use of radiology services. More use cases and greater accuracy are necessary to evaluate and implement such tools. |
Performance of Generative Large Language Models on Ophthalmology Board Style Questions. | To investigate the ability of generative artificial intelligence models to answer ophthalmology board-style questions. Design: Experimental study.
This study evaluated three large language models (LLMs) with chat interfaces, Bing Chat (Microsoft) and ChatGPT 3.5 and 4.0 (OpenAI), using 250 questions from the Basic Science and Clinical Science (BCSC) Self-Assessment Program (SAP). While ChatGPT is trained on information last updated in 2021, Bing Chat incorporates more recently indexed internet search to generate its answers. Performance was compared to human respondents. Questions were categorized by complexity and patient care phase, and instances of information fabrication or non-logical reasoning were documented.
Primary outcome: response accuracy. Secondary outcomes: performance in question subcategories and hallucination frequency.
Human respondents had an average accuracy of 72.2%. ChatGPT-3.5 scored the lowest (58.8%), while ChatGPT-4.0 (71.6%) and Bing Chat (71.2%) performed comparably. ChatGPT-4.0 excelled in workup-type questions (OR = 3.89, 95% CI 1.19-14.73, p = 0.03) compared with diagnostic questions, but struggled with image interpretation (OR = 0.14, 95% CI 0.05-0.33, p < 0.01) when compared with single step reasoning questions. Against single step questions, Bing Chat also faced difficulties with image interpretation (OR = 0.18, 95% CI 0.08-0.44, p < 0.01) and multi-step reasoning (OR = 0.30, 95% CI 0.11-0.84, p = 0.02). ChatGPT-3.5 had the highest rate of hallucinations or non-logical reasoning (42.4%), followed by ChatGPT-4.0 (18.0%) and Bing Chat (25.6%).
LLMs (particularly ChatGPT-4.0 and Bing Chat) can perform comparably to human respondents answering questions from the BCSC SAP. The frequency of hallucinations and non-logical reasoning suggests room for improvement in the performance of conversational agents in the medical domain. |
Genital and Extragenital Lichen Sclerosus et Atrophicus: A Case Series Written Using ChatGPT. | Background Lichen sclerosus et atrophicus (LSEA) is a chronic inflammatory dermatosis of genital and extragenital sites with a prevalence ranging from 9% in prepubertal patients to 50% in postmenopausal patients. Chat generative pre-trained transformer (ChatGPT) is an artificial intelligence tool designed to assist humans based on supervised and reinforcement techniques. In this study, we aimed to evaluate the characteristics of patients with LSEA using ChatGPT. Methods In this retrospective study, we included all patients who presented to the outpatient dermatology department during 2017-2022 at a tertiary care teaching hospital in South India. Information regarding demographic data, characteristics of LSEA, comorbidities, and associated autoimmune disorders was gathered using a medical chart review. Following data analysis and drafting of the manuscript, the utility of ChatGPT-3 and ChatGPT-4 in finalizing the draft was assessed. Results Of 20 patients diagnosed with LSEA, 16 (80%) and four (20%) patients were females and males, respectively. Of them, 50% of female patients had attained menopause. While 65% of patients had genital LSEA, 30% of patients had extragenital LSEA only, and 5% of patients had both genital and extragenital LSEA. Furthermore, four (20%) patients were prepubertal children. Of four male patients, two (50%) were younger than 18 years of age, and one patient was diagnosed with balanitis xerotica obliterans. The commonest associated features in LSEA included joint involvement (30%), hypertension (25%), and anemia (15%). Rare concomitant disorders included psoriasis, asthma, and basal cell carcinoma over the nose. Conclusions LSEA may be confused with other various dermatoses, such as morphea, vitiligo, and lichen planus. 
A high index of suspicion is required, especially in children, to diagnose it early and intervene to prevent further complications. Its relationship with autoimmune disorders and comorbidities warrants further large-scale studies. ChatGPT was unreliable in the literature search because it provided non-existent citations; ChatGPT-4 performed better than ChatGPT-3, as it provided a few genuine publications. ChatGPT was used in this study to summarize the articles identified by the authors during the literature search and to correct grammatical errors in the final draft of the manuscript. |
Performance of ChatGPT on dermatology Specialty Certificate Examination multiple choice questions. | ChatGPT is a large language model trained on increasingly large datasets by OpenAI to perform language-based tasks. It is capable of answering multiple-choice questions, such as those posed by the dermatology SCE examination. We posed 84 multiple-choice questions from the sample dermatology SCE question bank to two iterations of ChatGPT: ChatGPT-3.5 and ChatGPT-4. ChatGPT-3.5 achieved an overall score of 63.1%, while ChatGPT-4 scored 90.5%, a significant improvement in performance (p < 0.001). The typical pass mark for the dermatology SCE is 70-72%. ChatGPT-4 is therefore capable of answering clinical questions and achieving a passing grade on these sample questions. There are many possible educational and clinical implications for increasingly advanced artificial intelligence (AI) and its use in medicine, including in the diagnosis of dermatological conditions. Such advances should be embraced provided that patient safety is a core tenet, and the limitations of AI in the nuances of complex clinical cases are recognised. |
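The significance claim above can be sanity-checked with a two-proportion z-test on the raw counts implied by the percentages. This is a sketch under assumptions: 53/84 and 76/84 correct answers (which round to the reported 63.1% and 90.5%), and a pooled-variance z-test; the abstract does not state which statistical test the authors actually used.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal tail: P(|Z| > z)
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Assumed counts consistent with the reported scores:
# 53/84 ~ 63.1% (ChatGPT-3.5), 76/84 ~ 90.5% (ChatGPT-4)
z, p = two_proportion_z_test(53, 84, 76, 84)
print(f"z = {z:.2f}, p = {p:.1e}")
```

With these assumed counts, the p-value falls well below the reported 0.001 threshold, so the conclusion is robust to the choice of test.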
Chat Generative Pretrained Transformer Fails the Multiple-Choice American College of Gastroenterology Self-Assessment Test. | Chat Generative Pretrained Transformer (ChatGPT) is a natural language processing model that generates human-like text.
ChatGPT-3 and ChatGPT-4 were used to answer the 2022 and 2021 American College of Gastroenterology self-assessment tests. The exact questions were entered into both versions of ChatGPT. A score of 70% or higher was required to pass the assessment.
Overall, on the 455 included questions, ChatGPT-3 scored 65.1% and ChatGPT-4 scored 62.4%.
ChatGPT did not pass the American College of Gastroenterology self-assessment test. We do not recommend its use for medical education in gastroenterology in its current form. |
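The pass/fail arithmetic in the abstract above can be verified directly. This is a minimal sketch; the 455-question total and 70% pass mark come from the abstract, while the implied correct-answer counts are rounded back from the reported percentages.

```python
import math

TOTAL = 455        # questions included, per the abstract
PASS_MARK = 0.70   # 70% required to pass

# Reported overall scores
scores = {"ChatGPT-3": 0.651, "ChatGPT-4": 0.624}

# Smallest whole number of correct answers that clears the threshold
needed = math.ceil(PASS_MARK * TOTAL)

for model, score in scores.items():
    correct = round(score * TOTAL)  # approximate count implied by the percentage
    verdict = "pass" if correct >= needed else "fail"
    print(f"{model}: ~{correct}/{TOTAL} correct, need {needed} -> {verdict}")
```

Both models fall roughly 20-35 correct answers short of the threshold, consistent with the abstract's conclusion that neither version passed.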