<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>PARROT - Documentation</title>
<link rel="stylesheet" type="text/css" href="style.css">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
<a class="navbar-brand" href="#">PARROT</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNav"
aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarNav">
<div class="navbar-nav">
<a class="nav-link" href="index.html">Home</a>
<a class="nav-link active" href="#">Documentation</a>
<a class="nav-link" href="ai_models.html">AI models</a>
<a class="nav-link" href="contact.html">Contact</a>
</div>
</div>
</nav>
<button id="topBtn" onclick="topFunction()">↑</button>
<div class="container mt-5">
<div class="row">
<div class="col-lg-9">
<h1 class="mb-4 mt-4">Documentation</h1>
<section id="PARROT">
<h2>PARROT: Platform for ARtificial intelligence guided Radiation Oncology Treatment</h2>
<p>
<strong>PARROT</strong> is a local web application. It provides different modules that allow
users to:
</p>
<ul>
<li>
<p>Run AI models.</p>
</li>
<li>
<p>Visualize CT and (soon) MR images.</p>
</li>
<li>
<p>Visualize, predict or modify segmentation information.</p>
</li>
<li>
<p>Visualize, predict, evaluate, and compare dose distributions.</p>
</li>
</ul>
</section>
<section id="installation">
<h2>Installation steps</h2>
<h3>PARROT</h3>
<p>There are two ways to install PARROT. Downloading the executable file of the latest release from
                        the <a href="index.html">Home</a> page is recommended. Otherwise, the code and building
                        instructions are also available in our <a href="https://gitlab.com/ai4miro/parrot">GitLab
                            repository</a>.</p>
<h3>Orthanc</h3>
<p>
                        Patient data are stored using an Orthanc daemon, a lightweight, standalone DICOM server, which
                        ensures the confidentiality of patient files. To install Orthanc, either click on the link on
                        PARROT's home tab (see Figure 1) or use the following link: <a
                            href="https://orthanc.uclouvain.be/downloads/windows-64/installers/OrthancInstaller-Win64-24.2.3.exe">Install
                            Orthanc</a>.
                    </p>
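Since PARROT expects Orthanc on port 8042 (see the warning below), a quick way to verify the installation is to query Orthanc's REST API: the `/system` endpoint returns version information when the server is up. This is a minimal sketch using only the Python standard library; host and port are the defaults stated in this documentation.

```python
import json
import urllib.request
from urllib.error import URLError


def orthanc_url(path, host="localhost", port=8042):
    """Build a URL for the local Orthanc REST API (default port 8042)."""
    return f"http://{host}:{port}{path}"


def check_orthanc():
    """Return Orthanc's /system info as a dict if the server is up, else None."""
    try:
        with urllib.request.urlopen(orthanc_url("/system"), timeout=5) as resp:
            return json.load(resp)  # e.g. {"Name": ..., "Version": ..., ...}
    except URLError:
        return None  # Orthanc is not reachable on port 8042
```

If `check_orthanc()` returns `None`, make sure the Orthanc Windows service is running (see "Stopping the application" below).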
<figure>
<img src="img/install_orthanc.png" alt="Install orthanc" class="img-fluid mb-3">
<figcaption>Figure 1: Home screen, link to install Orthanc.</figcaption>
</figure>
<p class="warning">
                        ⚠ The Orthanc server will run on <a target="_blank" href="http://localhost:8042/">port 8042</a>,
                        while the nginx server for the application will run on <a target="_blank"
                            href="http://localhost:2000/">port 2000</a>.
                    </p>
</section>
<section id="organisation">
<h2>Organisation of the application</h2>
<p>The application is divided into multiple tabs that users can reach through the top navigation bar.
                    This section describes each tab and its features.</p>
<section id="patient-management">
<h3>Patient Management</h3>
<p>From this screen, users can import patient DICOM data (individual files or a zip archive) into a local <a
                                href="https://www.orthanc-server.com/">Orthanc</a> daemon. A list of the patients
                            present in the Orthanc server is displayed with summarized information. Users can load a
                            patient's data into PARROT by clicking on the Load button, or delete a patient from the
                            server with the red arrow on the right.</p>
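The import step corresponds to Orthanc's standard REST endpoint `POST /instances`, which accepts raw DICOM file bytes. As a hedged illustration (not PARROT's own import code), a folder of `.dcm` files could be uploaded to the local server like this, using only the Python standard library:

```python
import urllib.request
from pathlib import Path


def collect_dicom_files(folder):
    """Recursively list DICOM files (.dcm) under a folder, in a stable order."""
    return sorted(Path(folder).rglob("*.dcm"))


def upload_to_orthanc(folder, base="http://localhost:8042"):
    """POST each DICOM file to Orthanc's /instances endpoint."""
    for path in collect_dicom_files(folder):
        req = urllib.request.Request(
            f"{base}/instances",
            data=path.read_bytes(),
            headers={"Content-Type": "application/dicom"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(path.name, resp.status)  # Orthanc answers 200 and returns the new instance ID
```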
<figure>
<img src="img/screen_patient_management.PNG" alt="Patient Management Image"
class="img-fluid mb-3">
<figcaption>Figure 2: Patient Management screen.</figcaption>
</figure>
</section>
<section id="study-management">
<h3>Study Management</h3>
<p>A list of the studies of the loaded patient present in the Orthanc server is displayed with
                            summarized information. Users can load the data into the app by clicking on the Load button.
                            The left icon displays more complete information from the DICOM tags, and users can delete a
                            patient study from the server with the red arrow on the right.</p>
<figure>
<img src="img/screen_study_management.png" alt="Study Management Image"
class="img-fluid mb-3">
<figcaption>Figure 3: Study Management screen.</figcaption>
</figure>
</section>
<section id="ai-marketplace">
<h3>AI Marketplace</h3>
<figure>
<img src="img/screen_ai_marketplace.png" alt="AI Marketplace Image"
class="img-fluid mb-3">
<figcaption>Figure 4: AI Marketplace screen.</figcaption>
</figure>
<p>Eight segmentation models and six dose prediction models are made available on HuggingFace and can be downloaded from the AI Marketplace screen (Figure 4).
                            These models are built on state-of-the-art architectures such as
                            nnUNet <a href="#ref1">[1]</a>, SwinUNETR <a href="#ref2">[2]</a> and HDUNet <a href="#ref3">[3]</a> (see <a href="ai_models.html">AI models</a>).
                            We distinguish two categories of models: segmentation models and dose prediction
                            models. A key difference between the two is that segmentation models require only
                            the anatomy of the patient (CT or MR image), whereas dose prediction models also
                            require binary masks of the tumor volumes (TVs) and organs at risk (OARs). For the latter, we
                            implemented a dialog that lets users map the structures present in the current patient data
                            to the inputs required by the AI dose prediction model (Figure 7).</p>
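In essence, the structure-matching dialog builds a mapping from the ROI names found in the patient's structure set to the input channels a dose model expects. The sketch below shows one way such matching could work; the function, alias table, and structure names are all hypothetical illustrations, not PARROT's actual implementation:

```python
def match_structures(patient_rois, model_inputs, aliases):
    """Map each model input channel to a ROI name found in the patient data.

    `aliases` maps a model input name to the ROI spellings it accepts;
    inputs with no matching ROI are reported as missing so the user can
    resolve them manually in the dialog.
    """
    lookup = {name.lower(): name for name in patient_rois}  # case-insensitive index
    mapping, missing = {}, []
    for channel in model_inputs:
        for candidate in aliases.get(channel, [channel]):
            if candidate.lower() in lookup:
                mapping[channel] = lookup[candidate.lower()]
                break
        else:
            missing.append(channel)
    return mapping, missing
```

For example, a patient structure named `Parotid_L` could be matched to a model input labelled `Parotid (Left)` through the alias table, while inputs with no counterpart are flagged for manual selection.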
<p>To run these models, we also provide a Dockerfile that is automatically used to build a Docker container in which the models are executed.</p>
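For illustration, a minimal Dockerfile for a Python-based prediction model might look like the following. This is a generic sketch, not the actual Dockerfile shipped with PARROT's models; the base image, file names, and entry point are assumptions:

```dockerfile
# Illustrative only: a minimal image for a Python-based prediction model.
FROM python:3.10-slim
WORKDIR /app

# Install the model's Python dependencies first so they are cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model code and weights into the image.
COPY . .

# Running the container executes the model's prediction script.
ENTRYPOINT ["python", "predict.py"]
```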
</section>
<section id="ai-models-management">
<h3>Run AI Models</h3>
<figure>
<img src="img/screen_ai_models_management.png" alt="AI Models Management Image"
class="img-fluid mb-3">
<figcaption>Figure 5: AI Models Management screen.</figcaption>
</figure>
<p>To run a model prediction, the user simply clicks on the yellow Prediction button on the
                            right-hand side of the model line. This opens a Python scripting editor to run the code
                            (Figure 6). Two choices are available: run the code locally, or run it in one of
                            the provided Python environments. Users also have the flexibility to run their own
                            models by loading their own code into the Python scripting editor.
                        </p>
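Prediction scripts run in the editor generally share a load, preprocess, predict, save structure. The sketch below illustrates that shape with a hypothetical stand-in "model" (simple intensity thresholding on a 2D array) in place of a real trained network, since the actual model code varies per model:

```python
def predict_mask(image, threshold=0.5):
    """Stand-in 'model': segment a 2D image by thresholding each pixel.

    A real PARROT model would load trained weights and run inference here;
    only the surrounding pipeline structure is what prediction scripts share.
    """
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]


def run_prediction(image):
    """Minimal prediction pipeline: normalise intensities, then predict."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    scale = (hi - lo) or 1.0  # avoid division by zero on flat images
    norm = [[(p - lo) / scale for p in row] for row in image]
    return predict_mask(norm)
```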
<div class="row">
<figure class="col-5">
<img src="img/python_code_editor.png" alt="Code editor"
class="img-fluid mb-3">
<figcaption>Figure 6: Embedded Python code editor.</figcaption>
</figure>
<figure class="col-6">
<img src="img/contour_matching.png" alt="Input matching"
class="img-fluid mb-3">
<figcaption>Figure 7: Input matching.</figcaption>
</figure>
</div>
</section>
<section id="patient-modeling">
<h3>Patient Modeling</h3>
<figure>
<img src="img/screen_patient_modeling.png" alt="Patient Modeling Image"
class="img-fluid mb-3">
<figcaption>Figure 8: Patient Modeling screen.</figcaption>
</figure>
<p>Once the prediction is done, users can upload the result to the study to visualize the output in
the Patient Modeling screen (Figure 8).
</p>
</section>
<section id="plan-evaluation">
<h3>Plan Evaluation</h3>
<figure>
<img src="img/screen_plan_evaluation.png" alt="Plan Evaluation Image"
class="img-fluid mb-3">
<figcaption>Figure 9: Plan Evaluation screen.</figcaption>
</figure>
<p>The last screen presents a comprehensive, side-by-side comparison of two dose distributions (see Figure 9).
                            The two dose distributions can originate from the import step or result from the
                            inference of AI models. We provide a set of comparison tools, such as DVH curves, dose
                            statistics, and the fulfillment of clinical goals. Clinical goals can be uploaded following
                            an Excel template so that they fit the clinical requirements of the users' institutions. The
                            platform also incorporates treatment indications based on NTCP protocols. Multiple studies
                            have shown that predicted dose distributions yield meaningful output when used with
                            NTCP models for treatment-selection decision support <a href="#ref4">[4]</a><a href="#ref5">[5]</a>, making this a useful
                            decision support tool for the clinical community.
                        </p>
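A cumulative DVH reports, for each dose level, the fraction of a structure's volume receiving at least that dose. As a hedged sketch of the underlying computation (not PARROT's actual code), working on a flat list of per-voxel doses:

```python
import math


def cumulative_dvh(voxel_doses, dose_levels):
    """Fraction of volume receiving at least each dose level (cumulative DVH)."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= level) / n for level in dose_levels]


def dose_stats(voxel_doses):
    """Common plan statistics: mean dose and D95 (dose received by 95% of the volume)."""
    ordered = sorted(voxel_doses, reverse=True)
    k = math.ceil(0.95 * len(ordered))  # voxel count covering 95% of the volume
    return {
        "Dmean": sum(voxel_doses) / len(voxel_doses),
        "D95": ordered[k - 1],  # lowest dose among those k voxels
    }
```

Clinical goals (e.g. "D95 of the target at least 95% of the prescription") can then be checked directly against such statistics; note that clinical software typically interpolates the DVH rather than taking a discrete voxel as done here.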
</section>
<section>
<h3>References</h3>
<p id="ref1">[1] Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring
method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.</p>
<p id="ref2">[2] Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.</p>
<p id="ref3">[3] Nguyen D, Jia X, Sher D, Lin MH, Iqbal Z, Liu H, Jiang S. 3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture. Phys Med Biol. 2019 Mar 18;64(6):065020. doi: 10.1088/1361-6560/ab039b. PMID: 30703760.</p>
<p id="ref4">[4] Huet-Dastarac M, Michiels S, Rivas ST, et al. Patient selection for proton therapy using Normal Tissue Complication Probability with deep learning dose prediction for oropharyngeal cancer. Med Phys. 2023; 50: 6201–6214. https://doi.org/10.1002/mp.16431</p>
<p id="ref5">[5] Draguet C., Barragán-Montero A., Chocan Vera M., Thomas M., Populaire P., Defraene G., Haustermans K., Lee J., Sterpin E.,
Automated clinical decision support system with deep learning dose prediction and NTCP models to evaluate treatment complications in patients with esophageal cancer, Radiotherapy and Oncology, Volume 176, 2022, https://doi.org/10.1016/j.radonc.2022.08.031.</p>
</section>
<section id="stopping-the-application">
<h3>Stopping the application</h3>
<ul>
<li>
<p>
The nginx server will continue running, even when you close the web browser window.
To completely stop the server, double click the <strong>'Stop_PARROT.bat'</strong>
file in the <strong>'parrot'</strong> folder (or stop the nginx processes in the
Task Manager).
</p>
</li>
<li>
<p>
The Orthanc server runs as a Windows service and has to be stopped in the
<strong>'Task Manager'</strong> (in the tab <strong>'Services'</strong>). Don't
forget to start the service again when you want to use the <strong>'PARROT
application'</strong>.
</p>
</li>
</ul>
</section>
</section>
<section id="user-guide">
<h2>User guide</h2>
<section id="load-patient">
<h3>Load patient</h3>
<iframe width="560" height="315"
src="https://www.youtube.com/embed/iXPMgLM7UYs?si=mNdDxmjIKIThRAfA"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</section>
<section id="edit-contours">
<h3>Edit contours</h3>
<iframe width="560" height="315"
src="https://www.youtube.com/embed/1YO3PYhbvzs?si=swi0iNqSAI0hjTSU"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</section>
</section>
</div>
<div class="col-lg-3">
<nav id="right-menu" class="navbar navbar-light bg-light">
<nav class="nav flex-column">
<a class="nav-link" href="#installation">Installation</a>
<a class="nav-link" href="#organisation">Organisation of the application</a>
<ul>
<li><a class="nav-link" href="#patient-management">Patient Management</a></li>
<li><a class="nav-link" href="#study-management">Study Management</a></li>
<li><a class="nav-link" href="#ai-marketplace">AI Marketplace</a></li>
                        <li><a class="nav-link" href="#ai-models-management">Run AI Models</a></li>
<li><a class="nav-link" href="#patient-modeling">Patient Modeling</a></li>
<li><a class="nav-link" href="#plan-evaluation">Plan Evaluation</a></li>
</ul>
<a class="nav-link" href="#user-guide">User guide</a>
<ul>
<li><a class="nav-link" href="#load-patient">Load patient</a></li>
<li><a class="nav-link" href="#edit-contours">Edit contours</a></li>
</ul>
</nav>
</nav>
</div>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
<script>
let mybutton = document.getElementById("topBtn");
window.onscroll = function () { scrollFunction() };
function scrollFunction() {
console.log(mybutton.id);
if (document.body.scrollTop > 50 || document.documentElement.scrollTop > 50) { mybutton.style.display = "block"; }
else { mybutton.style.display = "none"; }
}
function topFunction() {
document.body.scrollTop = 0;
document.documentElement.scrollTop = 0;
}
</script>
</body>
</html>