<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>PARROT - Documentation</title>
<link rel="stylesheet" type="text/css" href="style.css">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
<a class="navbar-brand" href="#">PARROT</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNav"
aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarNav">
<div class="navbar-nav">
<a class="nav-link" href="index.html">Home</a>
<a class="nav-link active" href="#">Documentation</a>
<a class="nav-link" href="ai_models.html">AI models</a>
<a class="nav-link" href="contact.html">Contact</a>
</div>
</div>
</nav>
<button id="topBtn" onclick="topFunction()"></button>
<div class="container mt-5">
<div class="row">
<div class="col-lg-11">
<h1 class="mb-4 mt-4">Documentation</h1>
<section id="installation">
<h2>Installation</h2>
<p>Instructions for installing PARROT on your system can be found below. The installation package includes all servers; only the Orthanc server has to be installed manually.</p>
<p class="warning">⚠ The Orthanc server will run on port 8042 of your local computer and the nginx server on port 2000.</p>
</section>
<section id="PARROT-local-application">
<h2>PARROT Local Application</h2>
<p>
The <strong>PARROT Local Application</strong> is a web application (its user interface runs in your web browser) that you can install and run locally on your computer.
The application provides different modules that allow you to
</p>
<ul>
<li>
<p>run ML models.</p>
</li>
<li>
<p>visualize CT and MR images.</p>
</li>
<li>
<p>visualize, predict, or modify segmentation information.</p>
</li>
<li>
<p>visualize, evaluate, and compare dose treatment plans.</p>
</li>
</ul>
<p>
You can load patient studies into the application in the form of DICOM files. Internally, the application uses an nginx server to serve the web application, a Python server to run predictions with the
integrated trained AI models, and an Orthanc server to store the study data. The installation package includes all servers. However, the Orthanc server is a
manual installation.
</p>
<p class="warning">
⚠ The Orthanc server will run on <a target="_blank" href="http://localhost:8042/">port 8042</a> of your local computer and
the nginx server on <a target="_blank" href="http://localhost:2000/">port 2000</a>.
</p>
</section>
<section id="stopping-the-application">
<h2>Stopping the application</h2>
<ul>
<li>
<p>
The nginx server will continue running, even when you close the web browser window. To completely stop the server, double click the <strong>'Stop_PARROT.bat'</strong> file in the <strong>'parrot'</strong> folder (or stop the nginx processes in the Task Manager).
</p>
</li>
<li>
<p>
The Orthanc server runs as a Windows service and has to be stopped in the <strong>'Task Manager'</strong> (in the tab <strong>'Services'</strong>). Don't forget to start the service again when you want to use the <strong>'PARROT application'</strong>.
</p>
</li>
</ul>
</section>
<section id="folder-structure">
<h2>Folder Structure</h2>
<p>The <strong>'parrot'</strong> folder contains the following files:</p>
<ul>
<li>
<p><strong>'aiIntegration'</strong> contains the integrated trained AI models, the Python server environment and the API server directories.</p>
</li>
<li>
<p><strong>'dist'</strong> contains the PARROT web application.</p>
</li>
<li>
<p><strong>'logs'</strong> contains the log files of the nginx server.</p>
</li>
<li>
<p><strong>'temp'</strong> contains the temporary files of nginx.</p>
</li>
<li>
<p><strong>'tempai'</strong> contains the results of running the Python code. The user can configure the path for AI integration.</p>
</li>
<li>
<p><strong>'License'</strong> contains the License of the application.</p>
</li>
<li>
<p><strong>'nginx.exe'</strong> starts the nginx server (see the instructions above).</p>
</li>
<li>
<p><strong>'Start_PARROT.bat'</strong> starts the nginx server (see the instructions above).</p>
</li>
<li>
<p><strong>'Stop_PARROT.bat'</strong> stops the nginx server (see the instructions above).</p>
</li>
<li>
<p><strong>'unins000.exe'</strong> uninstalls the application from the computer.</p>
</li>
</ul>
</section>
<section id="patient-management">
<h2>Patient Management</h2>
<p>Users can import patient data (individual files or a zip archive) from this screen into a local Orthanc daemon, a lightweight, standalone DICOM server that keeps patient files on the local machine and thus ensures their confidentiality. A list of the patients present in the Orthanc server is displayed with summarized information. Users can load a patient into the app by clicking the Load button and delete a patient from the server with the red arrow on the right.</p>
<figure>
<img src="img/screen_patient_management.PNG" alt="Patient Management Image" class="img-fluid mb-3">
<figcaption>Figure ?: Patient Management screen.</figcaption>
</figure>
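<p>Besides the import screen, Orthanc also exposes a REST API on its port, so DICOM files can be imported from a script as well. The sketch below posts files to Orthanc's standard <code>/instances</code> endpoint; the default port from this documentation is assumed:</p>

```python
from pathlib import Path
from urllib.request import Request, urlopen

ORTHANC = "http://localhost:8042"  # default Orthanc port from this documentation

def instances_url(base_url: str) -> str:
    """URL of Orthanc's standard upload endpoint."""
    return base_url.rstrip("/") + "/instances"

def upload_dicom(path: str, base_url: str = ORTHANC) -> bytes:
    """POST one DICOM file to Orthanc; returns Orthanc's JSON answer."""
    req = Request(
        instances_url(base_url),
        data=Path(path).read_bytes(),
        headers={"Content-Type": "application/dicom"},
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read()

def upload_folder(folder: str, base_url: str = ORTHANC) -> int:
    """Upload every .dcm file found in a folder; returns how many were sent."""
    files = sorted(Path(folder).glob("*.dcm"))
    for f in files:
        upload_dicom(str(f), base_url)
    return len(files)
```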
</section>
<section id="study-management">
<h2>Study Management</h2>
<p>A list of the studies of the loaded patient present in the Orthanc server is displayed with summarized information. Users can load a study into the app by clicking the Load button. The icon on the left displays more complete information from the DICOM tags, and users can delete a patient study from the server with the red arrow on the right.</p>
<figure>
<img src="img/screen_study_management.png" alt="Study Management Image" class="img-fluid mb-3">
<figcaption>Figure ?: Study Management screen.</figcaption>
</figure>
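<p>The summarized information shown on this screen comes from the studies' DICOM tags, which Orthanc's REST API also exposes. A sketch of retrieving and summarizing them outside the app (default port assumed; the summary format is illustrative):</p>

```python
import json
from urllib.request import urlopen

ORTHANC = "http://localhost:8042"  # default Orthanc port from this documentation

def list_studies(base_url: str = ORTHANC) -> list:
    """Return the identifiers of all studies stored in Orthanc (GET /studies)."""
    with urlopen(f"{base_url}/studies") as resp:
        return json.load(resp)

def summarize(study: dict) -> str:
    """One-line summary built from a study's main DICOM tags."""
    tags = study.get("MainDicomTags", {})
    patient = study.get("PatientMainDicomTags", {})
    return "{} | {} | {}".format(
        patient.get("PatientName", "?"),
        tags.get("StudyDate", "?"),
        tags.get("StudyDescription", "?"),
    )
```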
</section>
<section id="ai-models-management">
<h2>AI Models Management</h2>
<figure>
<img src="img/screen_ai_models_management.png" alt="AI Models Management Image" class="img-fluid mb-3">
<figcaption>Figure ?: AI Models Management screen.</figcaption>
</figure>
<p>Eight segmentation models and six dose prediction models are made available and listed on the screen (Figure 2). These models are trained on state-of-the-art architectures such as nnUNet, SwinUNETR and HDUNet (Table 1). On the left-hand side of each line, a standardized description of the model is available. This PDF description file contains details about the architecture with a link to the original research article, details on the training process including the dataset, and the performance on our test set (Figure 3). Users also have the flexibility to run their own models using a Python scripting editor and to display the output in the viewer (Figure 4). We distinguish two categories of models: segmentation models and dose prediction models. A key difference between the two categories is that segmentation models require only the anatomy of the patient (a CT scan or MR image), whereas dose prediction models also require binary masks of the tumor volumes (TV) and organs at risk (OARs). For the latter, we implemented a dialog that allows users to map the structures present in the current patient data to the inputs required by the AI dose prediction model (Figure 5).</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/example_video_code" frameborder="0" allowfullscreen class="mb-4"></iframe>
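<p>The difference in required inputs between the two model categories can be sketched as a simple check. The category names and input labels below are illustrative only, not PARROT's internal representation:</p>

```python
def required_inputs(category: str) -> set:
    """Inputs a model category needs, per the distinction described above."""
    if category == "segmentation":
        return {"image"}                               # CT or MR anatomy only
    if category == "dose_prediction":
        return {"image", "tv_masks", "oar_masks"}      # anatomy + binary masks
    raise ValueError(f"unknown category: {category}")

def missing_inputs(category: str, available: set) -> set:
    """What the user must still provide before running a prediction."""
    return required_inputs(category) - available
```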
</section>
<section id="patient-modeling">
<h2>Patient Modeling</h2>
<figure>
<img src="img/screen_patient_modeling.png" alt="Patient Modeling Image" class="img-fluid mb-3">
<figcaption>Figure ?: Patient Modeling screen.</figcaption>
</figure>
<p>Eight segmentation models and six dose prediction models are made available and listed on the screen (Figure 3). These models are trained on state-of-the-art architectures such as nnUNet [2], SwinUNETR [3] and HDUNet [4] (Table 1). On the left-hand side of each line, a standardized description of the model is available. This PDF description file contains details about the architecture with a link to the original research article, details on the training process including the dataset, and the performance on our test set (Figure 4).</p>
<p>To run a model prediction, the user simply clicks the yellow Prediction button on the right-hand side of the model line. This opens a Python scripting editor to run the code (Figure 5). Two choices are available to the user: run the code locally or in one of the Python environments provided. The latter option provides a Python version with a set of libraries. Users can select a Python environment with the drop-down menu in the upper right corner; hovering over it displays a description of the libraries it contains. Once the prediction is done, users can upload the result to the study to visualize the output in the Patient Modeling screen (Figure 7). Users also have the flexibility to run their own models by loading their own code into the Python scripting editor. On our website, we provide a description and examples using the API to retrieve the patient information loaded in PARROT and to upload the result.</p>
<p>We distinguish two categories of models: segmentation models and dose prediction models. A key difference between the two categories is that segmentation models require only the anatomy of the patient (a CT scan or MR image), whereas dose prediction models also require binary masks of the tumor volumes (TV) and organs at risk (OARs). For the latter, we implemented a dialog that allows users to map the structures present in the current patient data to the inputs required by the AI dose prediction model (Figure 6).</p>
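<p>The structure-mapping dialog essentially matches the structure names found in the patient data to the input names the dose prediction model expects. A minimal sketch of such a matching step, with an illustrative matching rule and illustrative names:</p>

```python
def normalize(name: str) -> str:
    """Case- and punctuation-insensitive form of a structure name."""
    return "".join(c for c in name.lower() if c.isalnum())

def map_structures(patient_structs, model_inputs):
    """Propose a patient structure for each model input, or None if no match."""
    by_norm = {normalize(s): s for s in patient_structs}
    return {m: by_norm.get(normalize(m)) for m in model_inputs}
```

Inputs left at None would then have to be assigned manually in the dialog.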
</section>
<section id="plan-evaluation">
<h2>Plan Evaluation</h2>
<figure>
<img src="img/screen_plan_evaluation.png" alt="Plan Evaluation Image" class="img-fluid mb-3">
<figcaption>Figure ?: Plan Evaluation screen.</figcaption>
</figure>
<p>A comprehensive comparison of two dose distributions is presented on this last screen. The two dose distributions can originate from the importation step or result from the inference of AI models. We provide a set of tools for comparison, such as DVH curves, dose statistics, and the fulfillment of clinical goals. Clinical goals can be uploaded following an Excel template so that they fit the clinical requirements of the users' institutions. The platform incorporates treatment indications based on NTCP protocols. Multiple studies have shown that predicted dose distributions yield meaningful output when used with NTCP models for treatment-selection decision support [5] [6]. This makes the platform a useful decision support tool for the clinical community.
</p>
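<p>The DVH metrics behind these comparisons can be illustrated with a minimal sketch in plain Python. This is illustrative only: <code>dose</code> here is a flat list of voxel doses (Gy) inside one structure, whereas the application works on full 3D dose grids and structure masks:</p>

```python
def v_at_dose(dose, threshold):
    """V(threshold): fraction of the structure receiving at least `threshold` Gy."""
    return sum(d >= threshold for d in dose) / len(dose)

def d_at_volume(dose, fraction):
    """D(fraction): minimum dose received by the hottest `fraction` of voxels."""
    ranked = sorted(dose, reverse=True)
    n = max(1, round(fraction * len(ranked)))
    return ranked[n - 1]

def goal_met(value, limit, kind="max"):
    """Check one clinical goal: the metric must stay below (or above) a limit."""
    return value <= limit if kind == "max" else value >= limit
```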
</section>
</div>
<div class="col-lg-4">
<nav id="right-menu" class="navbar navbar-light bg-light">
<nav class="nav flex-column">
</nav>
</nav>
</div>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
<script>
let mybutton = document.getElementById("topBtn");
window.onscroll = function() {scrollFunction()};
function scrollFunction() {
if (document.body.scrollTop > 50 || document.documentElement.scrollTop > 50) { mybutton.style.display = "block";}
else { mybutton.style.display = "none";}
}
function topFunction() {
document.body.scrollTop = 0;
document.documentElement.scrollTop = 0;
}
</script>
</body>
</html>