{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Creating a Multi-AI Deploy App with Multiple Models\n",
    "\n",
    "This tutorial shows how to create an inference application with multiple models, focusing on model file organization, accessing the named model networks in the application for inference, and finally building an app package.\n",
    "\n",
    "Typically, multiple models work in tandem; e.g., a lung segmentation model's output, along with the original image, is then used by a lung nodule detection and classification model. There is, however, a lack of such models in the [MONAI Model Zoo](https://github.com/Project-MONAI/model-zoo) as of now. So, for illustration purposes, two independent models are used in this example, [Spleen Segmentation](https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation) and [Pancreas Segmentation](https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation). Both are trained with DICOM images of the CT modality, and both are packaged in the [MONAI Bundle](https://docs.monai.io/en/latest/bundle_intro.html) format. A single input of a CT Abdomen DICOM series can be used for both models within the application.\n",
    "\n",
    "\n",
    "## Important Steps\n",
    "- Place the model TorchScripts in a defined folder structure; see below for details\n",
    "- Pass the model name to the inference operator instance in the app\n",
    "- Connect the input to and output from the inference operators, as required by the app\n",
    "\n",
    "## Required Model File Organization\n",
    "\n",
    "- The model files in TorchScript, be they MONAI Bundle compliant or not, must each be placed in a uniquely named folder. The name of this folder becomes the name of the loaded model network in the application, and is used by the application to retrieve the network via the execution context.\n",
    "- The folders containing the individual model files must then be placed under a parent folder. The name of this folder is chosen by the application developer.\n",
    "- The path of the aforementioned parent folder is used to set the well-known environment variable for the model path, `HOLOSCAN_MODEL_PATH`, when the application is directly run as a program.\n",
    "- When the application is packaged as a MONAI Application Package (MAP), the parent folder is used as the model path, and the Packager copies all of the sub-folders to the well-known `models` folder in the MAP.\n",
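    "\n",
    "For example, when running the application directly, the same model path can be set programmatically before the app starts (a minimal sketch; the parent folder name `multi_models` matches the layout used later in this notebook):\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "# Point the well-known model path environment variable at the parent\n",
    "# folder, so the App SDK can find the model sub-folders under it.\n",
    "os.environ['HOLOSCAN_MODEL_PATH'] = 'multi_models'\n",
    "```\n",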
    "\n",
    "## Example Model File Organization\n",
    "\n",
    "In this example, the models are organized as shown below.\n",
    "```\n",
    "multi_models\n",
    "├── pancreas_ct_dints\n",
    "│   └── model.ts\n",
    "└── spleen_ct\n",
    "    └── model.ts\n",
    "```\n",
    "\n",
    "Please note,\n",
    "\n",
    "- The `multi_models` folder is the parent folder, whose path is used to set the well-known environment variable for the model path. When using the App SDK CLI Packager to build the application package, this is also used as the path for the models.\n",
    "- The sub-folder names become the model network names, `pancreas_ct_dints` and `spleen_ct`, respectively.\n",
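    "\n",
    "As a quick illustration of this convention, the model network names can be derived from the sub-folder names alone (a minimal sketch, not an App SDK API; the SDK performs the equivalent lookup internally):\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "def discover_model_names(parent_folder):\n",
    "    # Each sub-folder under the parent folder contributes its name\n",
    "    # as the name of a loaded model network in the application.\n",
    "    return sorted(p.name for p in Path(parent_folder).iterdir() if p.is_dir())\n",
    "```\n",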
    "\n",
    "In the following sections, we will demonstrate how to create and package the application using these two models.\n",
    "\n",
    ":::{note}\n",
    "The two models are both MONAI Bundles, published in the [MONAI Model Zoo](https://github.com/Project-MONAI/model-zoo):\n",
    "- [spleen_ct_segmentation, v0.3.2](https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation)\n",
    "- [pancreas_ct_dints_segmentation, v0.3.0](https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation)\n",
    "\n",
    "The DICOM CT series used as test input is downloaded from [TCIA](https://www.cancerimagingarchive.net/), CT Abdomen Collection ID `CPTAC-PDA` Subject ID `C3N-00198`.\n",
    "\n",
    "Both the DICOM files and the models have been packaged and shared on Google Drive.\n",
    ":::\n",
    "\n",
    "## Creating Operators and Connecting Them in the Application Class\n",
    "\n",
    "We will implement an application that consists of seven Operators:\n",
    "\n",
    "- **DICOMDataLoaderOperator**:\n",
    "    - **Input(dicom_files)**: a folder path (`Path`)\n",
    "    - **Output(dicom_study_list)**: a list of DICOM studies in memory (List[[`DICOMStudy`](/modules/_autosummary/monai.deploy.core.domain.DICOMStudy)])\n",
    "- **DICOMSeriesSelectorOperator**:\n",
    "    - **Input(dicom_study_list)**: a list of DICOM studies in memory (List[[`DICOMStudy`](/modules/_autosummary/monai.deploy.core.domain.DICOMStudy)])\n",
    "    - **Input(selection_rules)**: a selection rule (Dict)\n",
    "    - **Output(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "- **DICOMSeriesToVolumeOperator**:\n",
    "    - **Input(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "    - **Output(image)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "- **MonaiBundleInferenceOperator** x 2:\n",
    "    - **Input(image)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "    - **Output(pred)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "- **DICOMSegmentationWriterOperator** x 2:\n",
    "    - **Input(seg_image)**: a segmentation image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "    - **Input(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "    - **Output(dicom_seg_instance)**: a file path (`Path`)\n",
    "\n",
    "\n",
    ":::{note}\n",
    "The `DICOMSegmentationWriterOperator` needs both the segmentation image and the original DICOM series, the latter for reusing the patient demographics and other DICOM Study-level attributes, as well as for referencing the original SOP instance UIDs.\n",
    ":::\n",
    "\n",
    "The workflow of the application is illustrated below.\n",
    "\n",
    "```{mermaid}\n",
    "%%{init: {\"theme\": \"base\", \"themeVariables\": { \"fontSize\": \"16px\"}} }%%\n",
    "\n",
    "classDiagram\n",
    "    direction TB\n",
    "    DICOMDataLoaderOperator --|> DICOMSeriesSelectorOperator : dicom_study_list...dicom_study_list\n",
    "    DICOMSeriesSelectorOperator --|> DICOMSeriesToVolumeOperator : study_selected_series_list...study_selected_series_list\n",
    "\n",
    "    DICOMSeriesToVolumeOperator --|> Spleen_BundleInferenceOperator : image...image\n",
    "    DICOMSeriesSelectorOperator --|> Spleen_DICOMSegmentationWriterOperator : study_selected_series_list...study_selected_series_list\n",
    "    Spleen_BundleInferenceOperator --|> Spleen_DICOMSegmentationWriterOperator : pred...seg_image\n",
    "\n",
    "    DICOMSeriesToVolumeOperator --|> Pancreas_BundleInferenceOperator : image...image\n",
    "    DICOMSeriesSelectorOperator --|> Pancreas_DICOMSegmentationWriterOperator : study_selected_series_list...study_selected_series_list\n",
    "    Pancreas_BundleInferenceOperator --|> Pancreas_DICOMSegmentationWriterOperator : pred...seg_image\n",
    "\n",
    "    class DICOMDataLoaderOperator {\n",
    "        <in>dicom_files : DISK\n",
    "        dicom_study_list(out) IN_MEMORY\n",
    "    }\n",
    "    class DICOMSeriesSelectorOperator {\n",
    "        <in>dicom_study_list : IN_MEMORY\n",
    "        <in>selection_rules : IN_MEMORY\n",
    "        study_selected_series_list(out) IN_MEMORY\n",
    "    }\n",
    "    class DICOMSeriesToVolumeOperator {\n",
    "        <in>study_selected_series_list : IN_MEMORY\n",
    "        image(out) IN_MEMORY\n",
    "    }\n",
    "    class Spleen_BundleInferenceOperator {\n",
    "        <in>image : IN_MEMORY\n",
    "        pred(out) IN_MEMORY\n",
    "    }\n",
    "    class Pancreas_BundleInferenceOperator {\n",
    "        <in>image : IN_MEMORY\n",
    "        pred(out) IN_MEMORY\n",
    "    }\n",
    "    class Spleen_DICOMSegmentationWriterOperator {\n",
    "        <in>seg_image : IN_MEMORY\n",
    "        <in>study_selected_series_list : IN_MEMORY\n",
    "        dicom_seg_instance(out) DISK\n",
    "    }\n",
    "    class Pancreas_DICOMSegmentationWriterOperator {\n",
    "        <in>seg_image : IN_MEMORY\n",
    "        <in>study_selected_series_list : IN_MEMORY\n",
    "        dicom_seg_instance(out) DISK\n",
    "    }\n",
    "```\n",
    "\n",
    "### Setup environment\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install MONAI and other necessary image processing packages for the application\n",
    "!python -c \"import monai\" || pip install --upgrade -q \"monai\"\n",
    "!python -c \"import torch\" || pip install -q \"torch>=1.12.0\"\n",
    "!python -c \"import numpy\" || pip install -q \"numpy>=1.21\"\n",
    "!python -c \"import nibabel\" || pip install -q \"nibabel>=3.2.1\"\n",
    "!python -c \"import pydicom\" || pip install -q \"pydicom>=1.4.2\"\n",
    "!python -c \"import highdicom\" || pip install -q \"highdicom>=0.18.2\"\n",
    "!python -c \"import SimpleITK\" || pip install -q \"SimpleITK>=2.0.0\"\n",
    "\n",
    "# Install MONAI Deploy App SDK package\n",
    "!python -c \"import monai.deploy\" || pip install -q \"monai-deploy-app-sdk\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: you may need to restart the Jupyter kernel to use the updated packages."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download/Extract input and model/bundle files from Google Drive\n",
    "\n",
    "**_Note:_** Data files are now access controlled. Please first request permission to access the [shared folder on Google Drive](https://drive.google.com/drive/folders/1EONJsrwbGsS30td0hs8zl4WKjihew1Z3?usp=sharing). Then download the zip file, `ai_multi_model_bundle_data.zip`, from the `ai_multi_ai_app` folder to the same folder as this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Archive:  ai_multi_model_bundle_data.zip\n",
      "  inflating: dcm/1-001.dcm           \n",
      "  inflating: dcm/1-002.dcm           \n",
      "  inflating: dcm/1-003.dcm           \n",
      "  inflating: dcm/1-004.dcm           \n",
      "  inflating: dcm/1-005.dcm           \n",
      "  inflating: dcm/1-006.dcm           \n",
      "  inflating: dcm/1-007.dcm           \n",
      "  inflating: dcm/1-008.dcm           \n",
      "  inflating: dcm/1-009.dcm           \n",
      "  inflating: dcm/1-010.dcm           \n",
      "  inflating: dcm/1-011.dcm           \n",
      "  inflating: dcm/1-012.dcm           \n",
      "  inflating: dcm/1-013.dcm           \n",
      "  inflating: dcm/1-014.dcm           \n",
      "  inflating: dcm/1-015.dcm           \n",
      "  inflating: dcm/1-016.dcm           \n",
      "  inflating: dcm/1-017.dcm           \n",
      "  inflating: dcm/1-018.dcm           \n",
      "  inflating: dcm/1-019.dcm           \n",
      "  inflating: dcm/1-020.dcm           \n",
      "  inflating: dcm/1-021.dcm           \n",
      "  inflating: dcm/1-022.dcm           \n",
      "  inflating: dcm/1-023.dcm           \n",
      "  inflating: dcm/1-024.dcm           \n",
      "  inflating: dcm/1-025.dcm           \n",
      "  inflating: dcm/1-026.dcm           \n",
      "  inflating: dcm/1-027.dcm           \n",
      "  inflating: dcm/1-028.dcm           \n",
      "  inflating: dcm/1-029.dcm           \n",
      "  inflating: dcm/1-030.dcm           \n",
      "  inflating: dcm/1-031.dcm           \n",
      "  inflating: dcm/1-032.dcm           \n",
      "  inflating: dcm/1-033.dcm           \n",
      "  inflating: dcm/1-034.dcm           \n",
      "  inflating: dcm/1-035.dcm           \n",
      "  inflating: dcm/1-036.dcm           \n",
      "  inflating: dcm/1-037.dcm           \n",
      "  inflating: dcm/1-038.dcm           \n",
      "  inflating: dcm/1-039.dcm           \n",
      "  inflating: dcm/1-040.dcm           \n",
      "  inflating: dcm/1-041.dcm           \n",
      "  inflating: dcm/1-042.dcm           \n",
      "  inflating: dcm/1-043.dcm           \n",
      "  inflating: dcm/1-044.dcm           \n",
      "  inflating: dcm/1-045.dcm           \n",
      "  inflating: dcm/1-046.dcm           \n",
      "  inflating: dcm/1-047.dcm           \n",
      "  inflating: dcm/1-048.dcm           \n",
      "  inflating: dcm/1-049.dcm           \n",
      "  inflating: dcm/1-050.dcm           \n",
      "  inflating: dcm/1-051.dcm           \n",
      "  inflating: dcm/1-052.dcm           \n",
      "  inflating: dcm/1-053.dcm           \n",
      "  inflating: dcm/1-054.dcm           \n",
      "  inflating: dcm/1-055.dcm           \n",
      "  inflating: dcm/1-056.dcm           \n",
      "  inflating: dcm/1-057.dcm           \n",
      "  inflating: dcm/1-058.dcm           \n",
      "  inflating: dcm/1-059.dcm           \n",
      "  inflating: dcm/1-060.dcm           \n",
      "  inflating: dcm/1-061.dcm           \n",
      "  inflating: dcm/1-062.dcm           \n",
      "  inflating: dcm/1-063.dcm           \n",
      "  inflating: dcm/1-064.dcm           \n",
      "  inflating: dcm/1-065.dcm           \n",
      "  inflating: dcm/1-066.dcm           \n",
      "  inflating: dcm/1-067.dcm           \n",
      "  inflating: dcm/1-068.dcm           \n",
      "  inflating: dcm/1-069.dcm           \n",
      "  inflating: dcm/1-070.dcm           \n",
      "  inflating: dcm/1-071.dcm           \n",
      "  inflating: dcm/1-072.dcm           \n",
      "  inflating: dcm/1-073.dcm           \n",
      "  inflating: dcm/1-074.dcm           \n",
      "  inflating: dcm/1-075.dcm           \n",
      "  inflating: dcm/1-076.dcm           \n",
      "  inflating: dcm/1-077.dcm           \n",
      "  inflating: dcm/1-078.dcm           \n",
      "  inflating: dcm/1-079.dcm           \n",
      "  inflating: dcm/1-080.dcm           \n",
      "  inflating: dcm/1-081.dcm           \n",
      "  inflating: dcm/1-082.dcm           \n",
      "  inflating: dcm/1-083.dcm           \n",
      "  inflating: dcm/1-084.dcm           \n",
      "  inflating: dcm/1-085.dcm           \n",
      "  inflating: dcm/1-086.dcm           \n",
      "  inflating: dcm/1-087.dcm           \n",
      "  inflating: dcm/1-088.dcm           \n",
      "  inflating: dcm/1-089.dcm           \n",
      "  inflating: dcm/1-090.dcm           \n",
      "  inflating: dcm/1-091.dcm           \n",
      "  inflating: dcm/1-092.dcm           \n",
      "  inflating: dcm/1-093.dcm           \n",
      "  inflating: dcm/1-094.dcm           \n",
      "  inflating: dcm/1-095.dcm           \n",
      "  inflating: dcm/1-096.dcm           \n",
      "  inflating: dcm/1-097.dcm           \n",
      "  inflating: dcm/1-098.dcm           \n",
      "  inflating: dcm/1-099.dcm           \n",
      "  inflating: dcm/1-100.dcm           \n",
      "  inflating: dcm/1-101.dcm           \n",
      "  inflating: dcm/1-102.dcm           \n",
      "  inflating: dcm/1-103.dcm           \n",
      "  inflating: dcm/1-104.dcm           \n",
      "  inflating: dcm/1-105.dcm           \n",
      "  inflating: dcm/1-106.dcm           \n",
      "  inflating: dcm/1-107.dcm           \n",
      "  inflating: dcm/1-108.dcm           \n",
      "  inflating: dcm/1-109.dcm           \n",
      "  inflating: dcm/1-110.dcm           \n",
      "  inflating: dcm/1-111.dcm           \n",
      "  inflating: dcm/1-112.dcm           \n",
      "  inflating: dcm/1-113.dcm           \n",
      "  inflating: dcm/1-114.dcm           \n",
      "  inflating: dcm/1-115.dcm           \n",
      "  inflating: dcm/1-116.dcm           \n",
      "  inflating: dcm/1-117.dcm           \n",
      "  inflating: dcm/1-118.dcm           \n",
      "  inflating: dcm/1-119.dcm           \n",
      "  inflating: dcm/1-120.dcm           \n",
      "  inflating: dcm/1-121.dcm           \n",
      "  inflating: dcm/1-122.dcm           \n",
      "  inflating: dcm/1-123.dcm           \n",
      "  inflating: dcm/1-124.dcm           \n",
      "  inflating: dcm/1-125.dcm           \n",
      "  inflating: dcm/1-126.dcm           \n",
      "  inflating: dcm/1-127.dcm           \n",
      "  inflating: dcm/1-128.dcm           \n",
      "  inflating: dcm/1-129.dcm           \n",
      "  inflating: dcm/1-130.dcm           \n",
      "  inflating: dcm/1-131.dcm           \n",
      "  inflating: dcm/1-132.dcm           \n",
      "  inflating: dcm/1-133.dcm           \n",
      "  inflating: dcm/1-134.dcm           \n",
      "  inflating: dcm/1-135.dcm           \n",
      "  inflating: dcm/1-136.dcm           \n",
      "  inflating: dcm/1-137.dcm           \n",
      "  inflating: dcm/1-138.dcm           \n",
      "  inflating: dcm/1-139.dcm           \n",
      "  inflating: dcm/1-140.dcm           \n",
      "  inflating: dcm/1-141.dcm           \n",
      "  inflating: dcm/1-142.dcm           \n",
      "  inflating: dcm/1-143.dcm           \n",
      "  inflating: dcm/1-144.dcm           \n",
      "  inflating: dcm/1-145.dcm           \n",
      "  inflating: dcm/1-146.dcm           \n",
      "  inflating: dcm/1-147.dcm           \n",
      "  inflating: dcm/1-148.dcm           \n",
      "  inflating: dcm/1-149.dcm           \n",
      "  inflating: dcm/1-150.dcm           \n",
      "  inflating: dcm/1-151.dcm           \n",
      "  inflating: dcm/1-152.dcm           \n",
      "  inflating: dcm/1-153.dcm           \n",
      "  inflating: dcm/1-154.dcm           \n",
      "  inflating: dcm/1-155.dcm           \n",
      "  inflating: dcm/1-156.dcm           \n",
      "  inflating: dcm/1-157.dcm           \n",
      "  inflating: dcm/1-158.dcm           \n",
      "  inflating: dcm/1-159.dcm           \n",
      "  inflating: dcm/1-160.dcm           \n",
      "  inflating: dcm/1-161.dcm           \n",
      "  inflating: dcm/1-162.dcm           \n",
      "  inflating: dcm/1-163.dcm           \n",
      "  inflating: dcm/1-164.dcm           \n",
      "  inflating: dcm/1-165.dcm           \n",
      "  inflating: dcm/1-166.dcm           \n",
      "  inflating: dcm/1-167.dcm           \n",
      "  inflating: dcm/1-168.dcm           \n",
      "  inflating: dcm/1-169.dcm           \n",
      "  inflating: dcm/1-170.dcm           \n",
      "  inflating: dcm/1-171.dcm           \n",
      "  inflating: dcm/1-172.dcm           \n",
      "  inflating: dcm/1-173.dcm           \n",
      "  inflating: dcm/1-174.dcm           \n",
      "  inflating: dcm/1-175.dcm           \n",
      "  inflating: dcm/1-176.dcm           \n",
      "  inflating: dcm/1-177.dcm           \n",
      "  inflating: dcm/1-178.dcm           \n",
      "  inflating: dcm/1-179.dcm           \n",
      "  inflating: dcm/1-180.dcm           \n",
      "  inflating: dcm/1-181.dcm           \n",
      "  inflating: dcm/1-182.dcm           \n",
      "  inflating: dcm/1-183.dcm           \n",
      "  inflating: dcm/1-184.dcm           \n",
      "  inflating: dcm/1-185.dcm           \n",
      "  inflating: dcm/1-186.dcm           \n",
      "  inflating: dcm/1-187.dcm           \n",
      "  inflating: dcm/1-188.dcm           \n",
      "  inflating: dcm/1-189.dcm           \n",
      "  inflating: dcm/1-190.dcm           \n",
      "  inflating: dcm/1-191.dcm           \n",
      "  inflating: dcm/1-192.dcm           \n",
      "  inflating: dcm/1-193.dcm           \n",
      "  inflating: dcm/1-194.dcm           \n",
      "  inflating: dcm/1-195.dcm           \n",
      "  inflating: dcm/1-196.dcm           \n",
      "  inflating: dcm/1-197.dcm           \n",
      "  inflating: dcm/1-198.dcm           \n",
      "  inflating: dcm/1-199.dcm           \n",
      "  inflating: dcm/1-200.dcm           \n",
      "  inflating: dcm/1-201.dcm           \n",
      "  inflating: dcm/1-202.dcm           \n",
      "  inflating: dcm/1-203.dcm           \n",
      "  inflating: dcm/1-204.dcm           \n",
      "  inflating: multi_models/pancreas_ct_dints/model.ts  \n",
      "  inflating: multi_models/spleen_ct/model.ts  \n"
     ]
    }
   ],
   "source": [
    "# Download the ai_multi_model_bundle_data test data zip file. Please request access and download manually.\n",
    "# !pip install gdown\n",
    "# !gdown https://drive.google.com/file/d/1Iwx-jl7vBu67lMpHwJ2VueAOiTtJF4mL/view?usp=sharing\n",
    "\n",
    "# After downloading the ai_multi_model_bundle_data zip file from the web browser or using gdown,\n",
    "!unzip -o \"ai_multi_model_bundle_data.zip\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Set up environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "env: HOLOSCAN_INPUT_PATH=dcm\n",
      "env: HOLOSCAN_MODEL_PATH=multi_models\n",
      "env: HOLOSCAN_OUTPUT_PATH=output\n"
     ]
    }
   ],
   "source": [
    "models_folder = \"multi_models\"\n",
    "%env HOLOSCAN_INPUT_PATH dcm\n",
    "%env HOLOSCAN_MODEL_PATH {models_folder}\n",
    "%env HOLOSCAN_OUTPUT_PATH output"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Set up imports\n",
    "\n",
    "Let's import necessary classes/decorators to define Application and Operator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "import logging\n",
    "from pathlib import Path\n",
    "\n",
    "# Required for setting SegmentDescription attributes. Direct import as this is not part of App SDK package.\n",
    "from pydicom.sr.codedict import codes\n",
    "\n",
    "from monai.deploy.conditions import CountCondition\n",
    "from monai.deploy.core import AppContext, Application\n",
    "from monai.deploy.core.domain import Image\n",
    "from monai.deploy.core.io_type import IOType\n",
    "from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator\n",
    "from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription\n",
    "from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator\n",
    "from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator\n",
    "from monai.deploy.operators.monai_bundle_inference_operator import (\n",
    "    BundleConfigNames,\n",
    "    IOMapping,\n",
    "    MonaiBundleInferenceOperator,\n",
    ")\n",
    "from monai.deploy.operators.stl_conversion_operator import STLConversionOperator\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Determining the Input and Output for the Model Bundle Inference Operator\n",
    "\n",
    "The App SDK provides a `MonaiBundleInferenceOperator` class to perform inference with a MONAI Bundle, which is essentially a PyTorch model in TorchScript with additional metadata describing the model network and processing specification. This operator uses the MONAI utilities to parse a MONAI Bundle and automatically instantiate the objects required for input and output processing as well as inference; as such, it depends on MONAI transforms, inferers, and in turn their dependencies.\n",
    "\n",
    "Each operator class inherits from the base `Operator` class, and its input/output properties are specified in the `setup` function (as opposed to using the decorators `@input` and `@output` in version 0.5 and below).\n",
    "\n",
    "For the `MonaiBundleInferenceOperator` class, the input/output need to be defined to match those of the model network, both in name and data type. For the current release, an `IOMapping` object is used to connect the operator input/output to those of the model network by using the same names. This is likely to change, and be automated, in future releases once certain limitations in the App SDK are removed.\n",
    "\n",
    "The Spleen CT Segmentation model network has a named input, \"image\", and a named output, \"pred\"; both are of image type and can be mapped to the App SDK [Image](/modules/_autosummary/monai.deploy.core.domain.Image). This information is typically acquired by examining the model metadata `network_data_format` attribute in the bundle, as seen in this [example](https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/configs/metadata.json)."
   ]
  },
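  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete, below is an abbreviated, illustrative stand-in for the `network_data_format` section of such a bundle metadata file (contents trimmed; consult the linked `metadata.json` for the actual attributes):\n",
    "\n",
    "```python\n",
    "# Abbreviated, illustrative stand-in for the network_data_format\n",
    "# section of a bundle's configs/metadata.json.\n",
    "network_data_format = {\n",
    "    'inputs': {\n",
    "        'image': {'type': 'image', 'num_channels': 1},\n",
    "    },\n",
    "    'outputs': {\n",
    "        'pred': {'type': 'image', 'format': 'segmentation'},\n",
    "    },\n",
    "}\n",
    "\n",
    "# The named ports that the IOMapping objects must match by name.\n",
    "print(list(network_data_format['inputs']))   # ['image']\n",
    "print(list(network_data_format['outputs']))  # ['pred']\n",
    "```\n"
   ]
  },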
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating Application class\n",
    "\n",
    "Our application class is shown below.\n",
    "\n",
    "It defines the `App` class, which inherits from the base `Application` class.\n",
    "\n",
    "Objects required for DICOM parsing, series selection, pixel data conversion to a volume image, model-specific inference, and the AI-result-specific DICOM Segmentation object writers are created. The execution pipeline, as a Directed Acyclic Graph (DAG), is then constructed by connecting these objects through `self.add_flow()`.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class App(Application):\n",
    "    \"\"\"This example demonstrates how to create a multi-model/multi-AI application.\n",
    "\n",
    "    The important steps are:\n",
    "        1. Place the model TorchScripts in a defined folder structure, see below for details\n",
    "        2. Pass the model name to the inference operator instance in the app\n",
    "        3. Connect the input to and output from the inference operators, as required by the app\n",
    "\n",
    "    Required Model Folder Structure:\n",
    "        1. The model TorchScripts, be they MONAI Bundle compliant or not, must be placed in\n",
    "           a parent folder, whose path is used as the path to the model(s) on app execution\n",
    "        2. Each TorchScript file needs to be in a sub-folder, whose name is the model name\n",
    "\n",
    "    An example is shown below, where the `parent_folder` name is of the app developer's own choosing, and\n",
    "    the sub-folder names become the model names, `pancreas_ct_dints` and `spleen_ct`, respectively.\n",
    "\n",
    "        <parent_folder>\n",
    "        ├── pancreas_ct_dints\n",
    "        │   └── model.ts\n",
    "        └── spleen_ct\n",
    "            └── model.ts\n",
    "\n",
    "    Note:\n",
    "    1. The TorchScript files of MONAI Bundles can be downloaded from MONAI Model Zoo, at\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation, v0.3.2\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation, v0.3.8\n",
    "    2. The input DICOM instances are from a DICOM Series of CT Abdomen, similar to the ones\n",
    "       used in the Spleen Segmentation example\n",
    "    3. This example is purely for technical demonstration, not for clinical use\n",
    "\n",
    "    Execution Time Estimate:\n",
    "      With an NVIDIA GV100 32GB GPU, the execution time is around 87 seconds for an input DICOM series of 204 instances,\n",
    "      and 167 seconds for a series of 515 instances.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        \"\"\"Creates an application instance.\"\"\"\n",
    "        self._logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "        super().__init__(*args, **kwargs)\n",
    "\n",
    "    def run(self, *args, **kwargs):\n",
    "        # This method calls the base class to run. Can be omitted if simply calling through.\n",
    "        self._logger.info(f\"Begin {self.run.__name__}\")\n",
    "        super().run(*args, **kwargs)\n",
    "        self._logger.info(f\"End {self.run.__name__}\")\n",
    "\n",
    "    def compose(self):\n",
    "        \"\"\"Creates the app-specific operators and chains them up in the processing DAG.\"\"\"\n",
    "\n",
    "        logging.info(f\"Begin {self.compose.__name__}\")\n",
    "\n",
    "        app_context = Application.init_app_context({})  # Do not pass argv in Jupyter Notebook\n",
    "        app_input_path = Path(app_context.input_path)\n",
    "        app_output_path = Path(app_context.output_path)\n",
    "\n",
    "        # Create the custom operator(s) as well as SDK built-in operator(s).\n",
    "        study_loader_op = DICOMDataLoaderOperator(\n",
    "            self, CountCondition(self, 1), input_folder=app_input_path, name=\"study_loader_op\"\n",
    "        )\n",
    "        series_selector_op = DICOMSeriesSelectorOperator(self, rules=Sample_Rules_Text, name=\"series_selector_op\")\n",
    "        series_to_vol_op = DICOMSeriesToVolumeOperator(self, name=\"series_to_vol_op\")\n",
    "\n",
    "        # Create the inference operator that supports MONAI Bundle and automates the inference.\n",
    "        # The IOMapping labels match the input and prediction keys in the pre and post processing.\n",
    "        # The model_name needs to be provided, as this is a multi-model application and each\n",
    "        # inference operator needs to rely on the name to access its named loaded model network;\n",
    "        # an inference operator is created for each model.\n",
    "        #\n",
    "        # Pertinent MONAI Bundle:\n",
    "        #   https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation, v0.3.2\n",
    "        #   https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation, v0.3.8\n",
    "\n",
    "        config_names = BundleConfigNames(config_names=[\"inference\"])  # Same as the default\n",
    "\n",
    "        # This is the inference operator for the spleen_model bundle. Note the model name.\n",
    "        bundle_spleen_seg_op = MonaiBundleInferenceOperator(\n",
    "            self,\n",
    "            input_mapping=[IOMapping(\"image\", Image, IOType.IN_MEMORY)],\n",
    "            output_mapping=[IOMapping(\"pred\", Image, IOType.IN_MEMORY)],\n",
    "            app_context=app_context,\n",
    "            bundle_config_names=config_names,\n",
    "            model_name=\"spleen_ct\",\n",
    "            name=\"bundle_spleen_seg_op\",\n",
    "        )\n",
    "\n",
    "        # This is the inference operator for the pancreas_ct_dints bundle. Note the model name.\n",
    "        bundle_pancreas_seg_op = MonaiBundleInferenceOperator(\n",
    "            self,\n",
    "            input_mapping=[IOMapping(\"image\", Image, IOType.IN_MEMORY)],\n",
    "            output_mapping=[IOMapping(\"pred\", Image, IOType.IN_MEMORY)],\n",
    "            app_context=app_context,\n",
    "            bundle_config_names=config_names,\n",
    "            model_name=\"pancreas_ct_dints\",\n",
    "            name=\"bundle_pancreas_seg_op\",\n",
    "        )\n",
    "\n",
    "        # Create DICOM Seg writer providing the required segment description for each segment with\n",
    "        # the actual algorithm and the pertinent organ/tissue. The segment_label, algorithm_name,\n",
    "        # and algorithm_version are of DICOM VR LO type, limited to 64 chars.\n",
    "        # https://dicom.nema.org/medical/dicom/current/output/chtml/part05/sect_6.2.html\n",
    "        #\n",
    "        # NOTE: Each generated DICOM Seg will be a dcm file with the name based on the SOP instance UID.\n",
    "\n",
    "        # Description for the Spleen seg, and the seg writer obj\n",
    "        seg_descriptions_spleen = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Spleen\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Spleen,\n",
    "                algorithm_name=\"volumetric (3D) segmentation of the spleen from CT image\",\n",
    "                algorithm_family=codes.DCM.ArtificialIntelligence,\n",
    "                algorithm_version=\"0.3.2\",\n",
    "            )\n",
    "        ]\n",
    "\n",
    "        custom_tags_spleen = {\"SeriesDescription\": \"AI Spleen Seg for research use only. Not for clinical use.\"}\n",
    "        dicom_seg_writer_spleen = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=seg_descriptions_spleen,\n",
    "            custom_tags=custom_tags_spleen,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dicom_seg_writer_spleen\",\n",
    "        )\n",
    "\n",
    "        # Description for the Pancreas seg, and the seg writer obj\n",
    "        _algorithm_name = \"Pancreas CT DiNTS segmentation from CT image\"\n",
    "        _algorithm_family = codes.DCM.ArtificialIntelligence\n",
    "        _algorithm_version = \"0.3.8\"\n",
    "\n",
    "        seg_descriptions_pancreas = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Pancreas\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Pancreas,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Tumor\",\n",
    "                segmented_property_category=codes.SCT.Tumor,\n",
    "                segmented_property_type=codes.SCT.Tumor,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "        ]\n",
    "        custom_tags_pancreas = {\"SeriesDescription\": \"AI Pancreas Seg for research use only. Not for clinical use.\"}\n",
    "\n",
    "        dicom_seg_writer_pancreas = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=seg_descriptions_pancreas,\n",
    "            custom_tags=custom_tags_pancreas,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dicom_seg_writer_pancreas\",\n",
    "        )\n",
    "\n",
    "        # NOTE: Sharp eyed readers can already see that the above instantiation of object can be simply parameterized.\n",
    "        #       Very true, but leaving them as if for easy reading. In fact the whole app can be parameterized for general use.\n",
    "\n",
    "        # Create the processing pipeline, by specifying the upstream and downstream operators, and\n",
    "        # ensuring the output from the former matches the input of the latter, in both name and type.\n",
    "        self.add_flow(study_loader_op, series_selector_op, {(\"dicom_study_list\", \"dicom_study_list\")})\n",
    "        self.add_flow(\n",
    "            series_selector_op, series_to_vol_op, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "\n",
    "        # Feed the input image to all inference operators\n",
    "        self.add_flow(series_to_vol_op, bundle_spleen_seg_op, {(\"image\", \"image\")})\n",
    "        # The Pancreas CT Seg bundle requires PyTorch 1.12.0 to avoid failure to load.\n",
    "        self.add_flow(series_to_vol_op, bundle_pancreas_seg_op, {(\"image\", \"image\")})\n",
    "\n",
    "        # Create DICOM Seg for one of the inference output\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a upstream operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op, dicom_seg_writer_spleen, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(bundle_spleen_seg_op, dicom_seg_writer_spleen, {(\"pred\", \"seg_image\")})\n",
    "\n",
    "        # Create DICOM Seg for one of the inference output\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a upstream operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op,\n",
    "            dicom_seg_writer_pancreas,\n",
    "            {(\"study_selected_series_list\", \"study_selected_series_list\")},\n",
    "        )\n",
    "        self.add_flow(bundle_pancreas_seg_op, dicom_seg_writer_pancreas, {(\"pred\", \"seg_image\")})\n",
    "\n",
    "        logging.info(f\"End {self.compose.__name__}\")\n",
    "\n",
    "\n",
    "# This is a sample series selection rule in JSON, simply selecting CT series.\n",
    "# If the study has more than 1 CT series, then all of them will be selected.\n",
    "# Please see more detail in DICOMSeriesSelectorOperator.\n",
    "Sample_Rules_Text = \"\"\"\n",
    "{\n",
    "    \"selections\": [\n",
    "        {\n",
    "            \"name\": \"CT Series\",\n",
    "            \"conditions\": {\n",
    "                \"StudyDescription\": \"(.*?)\",\n",
    "                \"Modality\": \"(?i)CT\",\n",
    "                \"SeriesDescription\": \"(.*?)\"\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\"\"\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Executing app locally\n",
    "\n",
    "We can execute the app in the Jupyter notebook. Note that the DICOM files of the CT Abdomen series must be present in the input folder, the models are already staged, and environment variables are set.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[info] [fragment.cpp:705] Loading extensions from configs...\n",
      "[2025-04-22 12:14:06,240] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=[])\n",
      "[2025-04-22 12:14:06,259] [INFO] (root) - AppContext object: AppContext(input_path=dcm, output_path=output, model_path=multi_models, workdir=), triton_server_netloc=\n",
      "[2025-04-22 12:14:06,266] [INFO] (root) - End compose\n",
      "[info] [gxf_executor.cpp:265] Creating context\n",
      "[info] [gxf_executor.cpp:2396] Activating Graph...\n",
      "[info] [gxf_executor.cpp:2426] Running Graph...\n",
      "[info] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "[info] [greedy_scheduler.cpp:191] Scheduling 7 entities\n",
      "[2025-04-22 12:14:06,293] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "[2025-04-22 12:14:06,864] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "[2025-04-22 12:14:06,865] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "  # of series: 1\n",
      "[2025-04-22 12:14:06,866] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:06,866] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "[2025-04-22 12:14:06,867] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "[2025-04-22 12:14:06,867] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:06,868] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "[2025-04-22 12:14:06,868] [INFO] (root) -     Series attribute Modality value: CT\n",
      "[2025-04-22 12:14:06,869] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:06,869] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "[2025-04-22 12:14:06,871] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 12:14:06,871] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:06,872] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:06,872] [INFO] (root) - Series Selection finalized.\n",
      "[2025-04-22 12:14:06,873] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 12:14:06,873] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:07,392] [INFO] (root) - Casting to float32\n",
      "[2025-04-22 12:14:07,618] [INFO] (root) - Parsing from bundle_path: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/pancreas_ct_dints/model.ts\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/monai/bundle/reference_resolver.py:216: UserWarning: Detected deprecated name 'optional_packages_version' in configuration file, replacing with 'required_packages_version'.\n",
      "  warnings.warn(\n",
      "[2025-04-22 12:14:45,024] [INFO] (root) - Parsing from bundle_path: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/spleen_ct/model.ts\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "  check_person_name(patient_name)\n",
      "[2025-04-22 12:14:48,476] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:48,477] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 12:14:48,478] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:48,478] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 12:14:48,479] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 12:14:48,480] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:48,480] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 12:14:48,481] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 12:14:48,482] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[2025-04-22 12:14:49,557] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:49,559] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 12:14:49,560] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:49,561] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 12:14:49,561] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 12:14:49,562] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:14:49,563] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 12:14:49,564] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 12:14:49,564] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[info] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "[info] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "[info] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "[info] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "[2025-04-22 12:14:49,692] [INFO] (__main__.App) - End run\n"
     ]
    }
   ],
   "source": [
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "app = App().run()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the application is verified inside Jupyter notebook, we can write the whole application as a file, adding the following lines:\n",
    "\n",
    "```python\n",
    "if __name__ == \"__main__\":\n",
    "    App().run()\n",
    "```\n",
    "\n",
    "The above lines are needed to execute the application code by using `python` interpreter.\n",
    "\n",
    "A `__main__.py` file should also be added, so the application folder structure would look like below:\n",
    "\n",
    "```bash\n",
    "my_app\n",
    "├── __main__.py\n",
    "└── app.py\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create an application folder\n",
    "!mkdir -p my_app && rm -rf my_app/*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### app.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/app.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/app.py\n",
    "# Copyright 2021-2023 MONAI Consortium\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "\n",
    "import logging\n",
    "from pathlib import Path\n",
    "\n",
    "# Required for setting SegmentDescription attributes. Direct import as this is not part of App SDK package.\n",
    "from pydicom.sr.codedict import codes\n",
    "\n",
    "from monai.deploy.conditions import CountCondition\n",
    "from monai.deploy.core import AppContext, Application\n",
    "from monai.deploy.core.domain import Image\n",
    "from monai.deploy.core.io_type import IOType\n",
    "from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator\n",
    "from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription\n",
    "from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator\n",
    "from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator\n",
    "from monai.deploy.operators.monai_bundle_inference_operator import (\n",
    "    BundleConfigNames,\n",
    "    IOMapping,\n",
    "    MonaiBundleInferenceOperator,\n",
    ")\n",
    "\n",
    "\n",
    "class App(Application):\n",
    "    \"\"\"This example demonstrates how to create a multi-model/multi-AI application.\n",
    "\n",
    "    The important steps are:\n",
    "        1. Place the model TorchScripts in a defined folder structure, see below for details\n",
    "        2. Pass the model name to the inference operator instance in the app\n",
    "        3. Connect the input to and output from the inference operators, as required by the app\n",
    "\n",
    "    Required Model Folder Structure:\n",
    "        1. The model TorchScripts, be it MONAI Bundle compliant or not, must be placed in\n",
    "           a parent folder, whose path is used as the path to the model(s) on app execution\n",
    "        2. Each TorchScript file needs to be in a sub-folder, whose name is the model name\n",
    "\n",
    "    An example is shown below, where the `parent_foler` name can be the app's own choosing, and\n",
    "    the sub-folder names become model names, `pancreas_ct_dints` and `spleen_model`, respectively.\n",
    "\n",
    "        <parent_fodler>\n",
    "        ├── pancreas_ct_dints\n",
    "        │   └── model.ts\n",
    "        └── spleen_ct\n",
    "            └── model.ts\n",
    "\n",
    "    Note:\n",
    "    1. The TorchScript files of MONAI Bundles can be downloaded from MONAI Model Zoo, at\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation, v0.3.2\n",
    "       https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation, v0.3.8\n",
    "    2. The input DICOM instances are from a DICOM Series of CT Abdomen, similar to the ones\n",
    "       used in the Spleen Segmentation example\n",
    "    3. This example is purely for technical demonstration, not for clinical use\n",
    "\n",
    "    Execution Time Estimate:\n",
    "      With a Nvidia GV100 32GB GPU, the execution time is around 87 seconds for an input DICOM series of 204 instances,\n",
    "      and 167 second for a series of 515 instances.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        \"\"\"Creates an application instance.\"\"\"\n",
    "        self._logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "        super().__init__(*args, **kwargs)\n",
    "\n",
    "    def run(self, *args, **kwargs):\n",
    "        # This method calls the base class to run. Can be omitted if simply calling through.\n",
    "        self._logger.info(f\"Begin {self.run.__name__}\")\n",
    "        super().run(*args, **kwargs)\n",
    "        self._logger.info(f\"End {self.run.__name__}\")\n",
    "\n",
    "    def compose(self):\n",
    "        \"\"\"Creates the app specific operators and chain them up in the processing DAG.\"\"\"\n",
    "\n",
    "        logging.info(f\"Begin {self.compose.__name__}\")\n",
    "\n",
    "        # Use Commandline options over environment variables to init context.\n",
    "        app_context = Application.init_app_context(self.argv)\n",
    "        app_input_path = Path(app_context.input_path)\n",
    "        app_output_path = Path(app_context.output_path)\n",
    "\n",
    "        # Create the custom operator(s) as well as SDK built-in operator(s).\n",
    "        study_loader_op = DICOMDataLoaderOperator(\n",
    "            self, CountCondition(self, 1), input_folder=app_input_path, name=\"study_loader_op\"\n",
    "        )\n",
    "        series_selector_op = DICOMSeriesSelectorOperator(self, rules=Sample_Rules_Text, name=\"series_selector_op\")\n",
    "        series_to_vol_op = DICOMSeriesToVolumeOperator(self, name=\"series_to_vol_op\")\n",
    "\n",
    "        # Create the inference operator that supports MONAI Bundle and automates the inference.\n",
    "        # The IOMapping labels match the input and prediction keys in the pre and post processing.\n",
    "        # The model_name needs to be provided as this is a multi-model application and each inference\n",
    "        # operator need to rely on the name to access the named loaded model network.\n",
    "        # create an inference operator for each.\n",
    "        #\n",
    "        # Pertinent MONAI Bundle:\n",
    "        #   https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation, v0.3.2\n",
    "        #   https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation, v0.3.8\n",
    "\n",
    "        config_names = BundleConfigNames(config_names=[\"inference\"])  # Same as the default\n",
    "\n",
    "        # This is the inference operator for the spleen_model bundle. Note the model name.\n",
    "        bundle_spleen_seg_op = MonaiBundleInferenceOperator(\n",
    "            self,\n",
    "            input_mapping=[IOMapping(\"image\", Image, IOType.IN_MEMORY)],\n",
    "            output_mapping=[IOMapping(\"pred\", Image, IOType.IN_MEMORY)],\n",
    "            app_context=app_context,\n",
    "            bundle_config_names=config_names,\n",
    "            model_name=\"spleen_ct\",\n",
    "            name=\"bundle_spleen_seg_op\",\n",
    "        )\n",
    "\n",
    "        # This is the inference operator for the pancreas_ct_dints bundle. Note the model name.\n",
    "        bundle_pancreas_seg_op = MonaiBundleInferenceOperator(\n",
    "            self,\n",
    "            input_mapping=[IOMapping(\"image\", Image, IOType.IN_MEMORY)],\n",
    "            output_mapping=[IOMapping(\"pred\", Image, IOType.IN_MEMORY)],\n",
    "            app_context=app_context,\n",
    "            bundle_config_names=config_names,\n",
    "            model_name=\"pancreas_ct_dints\",\n",
    "            name=\"bundle_pancreas_seg_op\",\n",
    "        )\n",
    "\n",
    "        # Create DICOM Seg writer providing the required segment description for each segment with\n",
    "        # the actual algorithm and the pertinent organ/tissue. The segment_label, algorithm_name,\n",
    "        # and algorithm_version are of DICOM VR LO type, limited to 64 chars.\n",
    "        # https://dicom.nema.org/medical/dicom/current/output/chtml/part05/sect_6.2.html\n",
    "        #\n",
    "        # NOTE: Each generated DICOM Seg will be a dcm file with the name based on the SOP instance UID.\n",
    "\n",
    "        # Description for the Spleen seg, and the seg writer obj\n",
    "        seg_descriptions_spleen = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Spleen\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Spleen,\n",
    "                algorithm_name=\"volumetric (3D) segmentation of the spleen from CT image\",\n",
    "                algorithm_family=codes.DCM.ArtificialIntelligence,\n",
    "                algorithm_version=\"0.3.2\",\n",
    "            )\n",
    "        ]\n",
    "\n",
    "        custom_tags_spleen = {\"SeriesDescription\": \"AI Spleen Seg for research use only. Not for clinical use.\"}\n",
    "        dicom_seg_writer_spleen = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=seg_descriptions_spleen,\n",
    "            custom_tags=custom_tags_spleen,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dicom_seg_writer_spleen\",\n",
    "        )\n",
    "\n",
    "        # Description for the Pancreas seg, and the seg writer obj\n",
    "        _algorithm_name = \"Pancreas CT DiNTS segmentation from CT image\"\n",
    "        _algorithm_family = codes.DCM.ArtificialIntelligence\n",
    "        _algorithm_version = \"0.3.8\"\n",
    "\n",
    "        seg_descriptions_pancreas = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Pancreas\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Pancreas,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Tumor\",\n",
    "                segmented_property_category=codes.SCT.Tumor,\n",
    "                segmented_property_type=codes.SCT.Tumor,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "        ]\n",
    "        custom_tags_pancreas = {\"SeriesDescription\": \"AI Pancreas Seg for research use only. Not for clinical use.\"}\n",
    "\n",
    "        dicom_seg_writer_pancreas = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=seg_descriptions_pancreas,\n",
    "            custom_tags=custom_tags_pancreas,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dicom_seg_writer_pancreas\",\n",
    "        )\n",
    "\n",
    "        # NOTE: Sharp eyed readers can already see that the above instantiation of object can be simply parameterized.\n",
    "        #       Very true, but leaving them as if for easy reading. In fact the whole app can be parameterized for general use.\n",
    "\n",
    "        # Create the processing pipeline, by specifying the upstream and downstream operators, and\n",
    "        # ensuring the output from the former matches the input of the latter, in both name and type.\n",
    "        self.add_flow(study_loader_op, series_selector_op, {(\"dicom_study_list\", \"dicom_study_list\")})\n",
    "        self.add_flow(\n",
    "            series_selector_op, series_to_vol_op, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "\n",
    "        # Feed the input image to all inference operators\n",
    "        self.add_flow(series_to_vol_op, bundle_spleen_seg_op, {(\"image\", \"image\")})\n",
    "        # The Pancreas CT Seg bundle requires PyTorch 1.12.0 to avoid failure to load.\n",
    "        self.add_flow(series_to_vol_op, bundle_pancreas_seg_op, {(\"image\", \"image\")})\n",
    "\n",
    "        # Create DICOM Seg for one of the inference output\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a upstream operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op, dicom_seg_writer_spleen, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(bundle_spleen_seg_op, dicom_seg_writer_spleen, {(\"pred\", \"seg_image\")})\n",
    "\n",
    "        # Create DICOM Seg for one of the inference output\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a upstream operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op,\n",
    "            dicom_seg_writer_pancreas,\n",
    "            {(\"study_selected_series_list\", \"study_selected_series_list\")},\n",
    "        )\n",
    "        self.add_flow(bundle_pancreas_seg_op, dicom_seg_writer_pancreas, {(\"pred\", \"seg_image\")})\n",
    "\n",
    "        logging.info(f\"End {self.compose.__name__}\")\n",
    "\n",
    "\n",
    "# This is a sample series selection rule in JSON, simply selecting CT series.\n",
    "# If the study has more than 1 CT series, then all of them will be selected.\n",
    "# Please see more detail in DICOMSeriesSelectorOperator.\n",
    "Sample_Rules_Text = \"\"\"\n",
    "{\n",
    "    \"selections\": [\n",
    "        {\n",
    "            \"name\": \"CT Series\",\n",
    "            \"conditions\": {\n",
    "                \"StudyDescription\": \"(.*?)\",\n",
    "                \"Modality\": \"(?i)CT\",\n",
    "                \"SeriesDescription\": \"(.*?)\"\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\"\"\"\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    logging.info(f\"Begin {__name__}\")\n",
    "    App().run()\n",
    "    logging.info(f\"End {__name__}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/__main__.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/__main__.py\n",
    "from app import App\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    App().run()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "app.py\t__main__.py\n"
     ]
    }
   ],
   "source": [
    "!ls my_app"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this time, let's execute the app on the command line. Note the required e.\n",
    "\n",
    ":::{note}\n",
    "Since the environment variables have been set with the specific input data and model paths from earlier steps, it is not necessary to provide the command line options on running the application.\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\u001b[32minfo\u001b[m] [fragment.cpp:705] Loading extensions from configs...\n",
      "[2025-04-22 12:14:54,730] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=['my_app'])\n",
      "[2025-04-22 12:14:54,735] [INFO] (root) - AppContext object: AppContext(input_path=dcm, output_path=output, model_path=multi_models, workdir=), triton_server_netloc=\n",
      "[2025-04-22 12:14:54,737] [INFO] (root) - End compose\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:265] Creating context\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2396] Activating Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2426] Running Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:191] Scheduling 7 entities\n",
      "[2025-04-22 12:14:54,756] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "  # of series: 1\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "[2025-04-22 12:14:55,597] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) -     Series attribute Modality value: CT\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Series Selection finalized.\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 12:14:55,598] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 12:14:55,815] [INFO] (root) - Casting to float32\n",
      "[2025-04-22 12:14:55,872] [INFO] (root) - Parsing from bundle_path: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/pancreas_ct_dints/model.ts\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/monai/bundle/reference_resolver.py:216: UserWarning: Detected deprecated name 'optional_packages_version' in configuration file, replacing with 'required_packages_version'.\n",
      "  warnings.warn(\n",
      "[2025-04-22 12:15:29,019] [INFO] (root) - Parsing from bundle_path: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/spleen_ct/model.ts\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "  check_person_name(patient_name)\n",
      "[2025-04-22 12:15:32,361] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 12:15:32,362] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 12:15:32,363] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 12:15:33,346] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 12:15:33,347] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 12:15:33,347] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "[2025-04-22 12:15:33,435] [INFO] (app.App) - End run\n"
     ]
    }
   ],
   "source": [
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "!python my_app"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.2.826.0.1.3680043.10.511.3.34841928451888108286361340675987576.dcm\n",
      "1.2.826.0.1.3680043.10.511.3.36403385704959959901485544349934328.dcm\n"
     ]
    }
   ],
   "source": [
    "!ls $HOLOSCAN_OUTPUT_PATH"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Packaging app"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's package the app with the [MONAI Application Packager](/developing_with_sdk/packaging_app).\n",
    "\n",
    "In this version of the App SDK, the configuration YAML file and the package requirements file must first be written out in the application folder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/app.yaml\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/app.yaml\n",
    "%YAML 1.2\n",
    "---\n",
    "application:\n",
    "  title: MONAI Deploy App Package - Multi Model App\n",
    "  version: 1.0\n",
    "  inputFormats: [\"file\"]\n",
    "  outputFormats: [\"file\"]\n",
    "\n",
    "resources:\n",
    "  cpu: 1\n",
    "  gpu: 1\n",
    "  memory: 1Gi\n",
    "  gpuMemory: 10Gi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/requirements.txt\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/requirements.txt\n",
    "highdicom>=0.18.2\n",
    "monai>=1.0\n",
    "nibabel>=3.2.1\n",
    "numpy>=1.21.6\n",
    "pydicom>=2.3.0\n",
    "setuptools>=59.5.0 # for pkg_resources\n",
    "SimpleITK>=2.0.0\n",
    "torch>=1.12.0\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can use the CLI package command to build the MONAI Application Package (MAP) container image based on a supported base image.\n",
    "\n",
    ":::{note}\n",
    "Building a MONAI Application Package (Docker image) can take time. Use the `-l DEBUG` option to see the progress.\n",
    ":::"
   ]
  },
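  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the packaging command for this app can be sketched as below. The model folder (`multi_models`), config file (`my_app/app.yaml`), and tag (`my_app:1.0`) are taken from this tutorial; the `--platform x64-workstation` value is an assumption inferred from the build log and may differ in your environment:\n",
    "\n",
    "```bash\n",
    "# Sketch only: package the app with its models and config into a MAP image\n",
    "monai-deploy package my_app -m multi_models -c my_app/app.yaml -t my_app:1.0 --platform x64-workstation -l DEBUG\n",
    "```"
   ]
  },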
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-04-22 12:15:35,532] [INFO] (common) - Downloading CLI manifest file...\n",
      "[2025-04-22 12:15:35,793] [DEBUG] (common) - Validating CLI manifest file...\n",
      "[2025-04-22 12:15:35,794] [INFO] (packager.parameters) - Application: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app\n",
      "[2025-04-22 12:15:35,794] [INFO] (packager.parameters) - Detected application type: Python Module\n",
      "[2025-04-22 12:15:35,794] [INFO] (packager) - Scanning for models in /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models...\n",
      "[2025-04-22 12:15:35,795] [DEBUG] (packager) - Model spleen_ct=/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/spleen_ct added.\n",
      "[2025-04-22 12:15:35,795] [DEBUG] (packager) - Model pancreas_ct_dints=/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/pancreas_ct_dints added.\n",
      "[2025-04-22 12:15:35,795] [INFO] (packager) - Reading application configuration from /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/app.yaml...\n",
      "[2025-04-22 12:15:35,798] [INFO] (packager) - Generating app.json...\n",
      "[2025-04-22 12:15:35,798] [INFO] (packager) - Generating pkg.json...\n",
      "[2025-04-22 12:15:35,804] [DEBUG] (common) - \n",
      "=============== Begin app.json ===============\n",
      "{\n",
      "    \"apiVersion\": \"1.0.0\",\n",
      "    \"command\": \"[\\\"python3\\\", \\\"/opt/holoscan/app\\\"]\",\n",
      "    \"environment\": {\n",
      "        \"HOLOSCAN_APPLICATION\": \"/opt/holoscan/app\",\n",
      "        \"HOLOSCAN_INPUT_PATH\": \"input/\",\n",
      "        \"HOLOSCAN_OUTPUT_PATH\": \"output/\",\n",
      "        \"HOLOSCAN_WORKDIR\": \"/var/holoscan\",\n",
      "        \"HOLOSCAN_MODEL_PATH\": \"/opt/holoscan/models\",\n",
      "        \"HOLOSCAN_CONFIG_PATH\": \"/var/holoscan/app.yaml\",\n",
      "        \"HOLOSCAN_APP_MANIFEST_PATH\": \"/etc/holoscan/app.json\",\n",
      "        \"HOLOSCAN_PKG_MANIFEST_PATH\": \"/etc/holoscan/pkg.json\",\n",
      "        \"HOLOSCAN_DOCS_PATH\": \"/opt/holoscan/docs\",\n",
      "        \"HOLOSCAN_LOGS_PATH\": \"/var/holoscan/logs\"\n",
      "    },\n",
      "    \"input\": {\n",
      "        \"path\": \"input/\",\n",
      "        \"formats\": null\n",
      "    },\n",
      "    \"liveness\": null,\n",
      "    \"output\": {\n",
      "        \"path\": \"output/\",\n",
      "        \"formats\": null\n",
      "    },\n",
      "    \"readiness\": null,\n",
      "    \"sdk\": \"monai-deploy\",\n",
      "    \"sdkVersion\": \"3.0.0\",\n",
      "    \"timeout\": 0,\n",
      "    \"version\": 1.0,\n",
      "    \"workingDirectory\": \"/var/holoscan\"\n",
      "}\n",
      "================ End app.json ================\n",
      "                 \n",
      "[2025-04-22 12:15:35,804] [DEBUG] (common) - \n",
      "=============== Begin pkg.json ===============\n",
      "{\n",
      "    \"apiVersion\": \"1.0.0\",\n",
      "    \"applicationRoot\": \"/opt/holoscan/app\",\n",
      "    \"modelRoot\": \"/opt/holoscan/models\",\n",
      "    \"models\": {\n",
      "        \"spleen_ct\": \"/opt/holoscan/models/spleen_ct\",\n",
      "        \"pancreas_ct_dints\": \"/opt/holoscan/models/pancreas_ct_dints\"\n",
      "    },\n",
      "    \"resources\": {\n",
      "        \"cpu\": 1,\n",
      "        \"gpu\": 1,\n",
      "        \"memory\": \"1Gi\",\n",
      "        \"gpuMemory\": \"10Gi\"\n",
      "    },\n",
      "    \"version\": 1.0,\n",
      "    \"platformConfig\": \"dgpu\"\n",
      "}\n",
      "================ End pkg.json ================\n",
      "                 \n",
      "[2025-04-22 12:15:36,273] [DEBUG] (packager.builder) - \n",
      "========== Begin Build Parameters ==========\n",
      "{'additional_lib_paths': '',\n",
      " 'app_config_file_path': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/app.yaml'),\n",
      " 'app_dir': PosixPath('/opt/holoscan/app'),\n",
      " 'app_json': '/etc/holoscan/app.json',\n",
      " 'application': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app'),\n",
      " 'application_directory': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app'),\n",
      " 'application_type': 'PythonModule',\n",
      " 'build_cache': PosixPath('/home/mqin/.holoscan_build_cache'),\n",
      " 'cmake_args': '',\n",
      " 'command': '[\"python3\", \"/opt/holoscan/app\"]',\n",
      " 'command_filename': 'my_app',\n",
      " 'config_file_path': PosixPath('/var/holoscan/app.yaml'),\n",
      " 'docs_dir': PosixPath('/opt/holoscan/docs'),\n",
      " 'full_input_path': PosixPath('/var/holoscan/input'),\n",
      " 'full_output_path': PosixPath('/var/holoscan/output'),\n",
      " 'gid': 1000,\n",
      " 'holoscan_sdk_version': '3.1.0',\n",
      " 'includes': [],\n",
      " 'input_dir': 'input/',\n",
      " 'lib_dir': PosixPath('/opt/holoscan/lib'),\n",
      " 'logs_dir': PosixPath('/var/holoscan/logs'),\n",
      " 'models': {'pancreas_ct_dints': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/pancreas_ct_dints'),\n",
      "            'spleen_ct': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/multi_models/spleen_ct')},\n",
      " 'models_dir': PosixPath('/opt/holoscan/models'),\n",
      " 'monai_deploy_app_sdk_version': '3.0.0',\n",
      " 'no_cache': False,\n",
      " 'output_dir': 'output/',\n",
      " 'pip_packages': None,\n",
      " 'pkg_json': '/etc/holoscan/pkg.json',\n",
      " 'requirements_file_path': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/requirements.txt'),\n",
      " 'sdk': <SdkType.MonaiDeploy: 'monai-deploy'>,\n",
      " 'sdk_type': 'monai-deploy',\n",
      " 'tarball_output': None,\n",
      " 'timeout': 0,\n",
      " 'title': 'MONAI Deploy App Package - Multi Model App',\n",
      " 'uid': 1000,\n",
      " 'username': 'holoscan',\n",
      " 'version': 1.0,\n",
      " 'working_dir': PosixPath('/var/holoscan')}\n",
      "=========== End Build Parameters ===========\n",
      "\n",
      "[2025-04-22 12:15:36,273] [DEBUG] (packager.builder) - \n",
      "========== Begin Platform Parameters ==========\n",
      "{'base_image': 'nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04',\n",
      " 'build_image': None,\n",
      " 'cuda_deb_arch': 'x86_64',\n",
      " 'custom_base_image': False,\n",
      " 'custom_holoscan_sdk': False,\n",
      " 'custom_monai_deploy_sdk': False,\n",
      " 'gpu_type': 'dgpu',\n",
      " 'holoscan_deb_arch': 'amd64',\n",
      " 'holoscan_sdk_file': '3.1.0',\n",
      " 'holoscan_sdk_filename': '3.1.0',\n",
      " 'monai_deploy_sdk_file': None,\n",
      " 'monai_deploy_sdk_filename': None,\n",
      " 'tag': 'my_app:1.0',\n",
      " 'target_arch': 'x86_64'}\n",
      "=========== End Platform Parameters ===========\n",
      "\n",
      "[2025-04-22 12:15:36,293] [DEBUG] (packager.builder) - \n",
      "========== Begin Dockerfile ==========\n",
      "\n",
      "ARG GPU_TYPE=dgpu\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "FROM nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04 AS base\n",
      "\n",
      "RUN apt-get update \\\n",
      "    && apt-get install -y --no-install-recommends --no-install-suggests \\\n",
      "        curl \\\n",
      "        jq \\\n",
      "    && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "# FROM base AS mofed-installer\n",
      "# ARG MOFED_VERSION=23.10-2.1.3.1\n",
      "\n",
      "# # In a container, we only need to install the user space libraries, though the drivers are still\n",
      "# # needed on the host.\n",
      "# # Note: MOFED's installation is not easily portable, so we can't copy the output of this stage\n",
      "# # to our final stage, but must inherit from it. For that reason, we keep track of the build/install\n",
      "# # only dependencies in the `MOFED_DEPS` variable (parsing the output of `--check-deps-only`) to\n",
      "# # remove them in that same layer, to ensure they are not propagated in the final image.\n",
      "# WORKDIR /opt/nvidia/mofed\n",
      "# ARG MOFED_INSTALL_FLAGS=\"--dpdk --with-mft --user-space-only --force --without-fw-update\"\n",
      "# RUN UBUNTU_VERSION=$(cat /etc/lsb-release | grep DISTRIB_RELEASE | cut -d= -f2) \\\n",
      "#     && OFED_PACKAGE=\"MLNX_OFED_LINUX-${MOFED_VERSION}-ubuntu${UBUNTU_VERSION}-$(uname -m)\" \\\n",
      "#     && curl -S -# -o ${OFED_PACKAGE}.tgz -L \\\n",
      "#         https://www.mellanox.com/downloads/ofed/MLNX_OFED-${MOFED_VERSION}/${OFED_PACKAGE}.tgz \\\n",
      "#     && tar xf ${OFED_PACKAGE}.tgz \\\n",
      "#     && MOFED_INSTALLER=$(find . -name mlnxofedinstall -type f -executable -print) \\\n",
      "#     && MOFED_DEPS=$(${MOFED_INSTALLER} ${MOFED_INSTALL_FLAGS} --check-deps-only 2>/dev/null | tail -n1 |  cut -d' ' -f3-) \\\n",
      "#     && apt-get update \\\n",
      "#     && apt-get install --no-install-recommends -y ${MOFED_DEPS} \\\n",
      "#     && ${MOFED_INSTALLER} ${MOFED_INSTALL_FLAGS} \\\n",
      "#     && rm -r * \\\n",
      "#     && apt-get remove -y ${MOFED_DEPS} && apt-get autoremove -y \\\n",
      "#     && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "FROM base AS release\n",
      "ENV DEBIAN_FRONTEND=noninteractive\n",
      "ENV TERM=xterm-256color\n",
      "\n",
      "ARG GPU_TYPE\n",
      "ARG UNAME\n",
      "ARG UID\n",
      "ARG GID\n",
      "\n",
      "RUN mkdir -p /etc/holoscan/ \\\n",
      "        && mkdir -p /opt/holoscan/ \\\n",
      "        && mkdir -p /var/holoscan \\\n",
      "        && mkdir -p /opt/holoscan/app \\\n",
      "        && mkdir -p /var/holoscan/input \\\n",
      "        && mkdir -p /var/holoscan/output\n",
      "\n",
      "LABEL base=\"nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\"\n",
      "LABEL tag=\"my_app:1.0\"\n",
      "LABEL org.opencontainers.image.title=\"MONAI Deploy App Package - Multi Model App\"\n",
      "LABEL org.opencontainers.image.version=\"1.0\"\n",
      "LABEL org.nvidia.holoscan=\"3.1.0\"\n",
      "\n",
      "LABEL org.monai.deploy.app-sdk=\"3.0.0\"\n",
      "\n",
      "ENV HOLOSCAN_INPUT_PATH=/var/holoscan/input\n",
      "ENV HOLOSCAN_OUTPUT_PATH=/var/holoscan/output\n",
      "ENV HOLOSCAN_WORKDIR=/var/holoscan\n",
      "ENV HOLOSCAN_APPLICATION=/opt/holoscan/app\n",
      "ENV HOLOSCAN_TIMEOUT=0\n",
      "ENV HOLOSCAN_MODEL_PATH=/opt/holoscan/models\n",
      "ENV HOLOSCAN_DOCS_PATH=/opt/holoscan/docs\n",
      "ENV HOLOSCAN_CONFIG_PATH=/var/holoscan/app.yaml\n",
      "ENV HOLOSCAN_APP_MANIFEST_PATH=/etc/holoscan/app.json\n",
      "ENV HOLOSCAN_PKG_MANIFEST_PATH=/etc/holoscan/pkg.json\n",
      "ENV HOLOSCAN_LOGS_PATH=/var/holoscan/logs\n",
      "ENV HOLOSCAN_VERSION=3.1.0\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "# If torch is installed, we can skip installing Python\n",
      "ENV PYTHON_VERSION=3.10.6-1~22.04\n",
      "ENV PYTHON_PIP_VERSION=22.0.2+dfsg-*\n",
      "\n",
      "RUN apt update \\\n",
      "    && apt-get install -y --no-install-recommends --no-install-suggests \\\n",
      "        python3-minimal=${PYTHON_VERSION} \\\n",
      "        libpython3-stdlib=${PYTHON_VERSION} \\\n",
      "        python3=${PYTHON_VERSION} \\\n",
      "        python3-venv=${PYTHON_VERSION} \\\n",
      "        python3-pip=${PYTHON_PIP_VERSION} \\\n",
      "    && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "RUN groupadd -f -g $GID $UNAME\n",
      "RUN useradd -rm -d /home/$UNAME -s /bin/bash -g $GID -G sudo -u $UID $UNAME\n",
      "RUN chown -R holoscan /var/holoscan && \\\n",
      "    chown -R holoscan /var/holoscan/input && \\\n",
      "    chown -R holoscan /var/holoscan/output\n",
      "\n",
      "# Set the working directory\n",
      "WORKDIR /var/holoscan\n",
      "\n",
      "# Copy HAP/MAP tool script\n",
      "COPY ./tools /var/holoscan/tools\n",
      "RUN chmod +x /var/holoscan/tools\n",
      "\n",
      "# Set the working directory\n",
      "WORKDIR /var/holoscan\n",
      "\n",
      "USER $UNAME\n",
      "\n",
      "ENV PATH=/home/${UNAME}/.local/bin:/opt/nvidia/holoscan/bin:$PATH\n",
      "ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/${UNAME}/.local/lib/python3.10/site-packages/holoscan/lib\n",
      "\n",
      "COPY ./pip/requirements.txt /tmp/requirements.txt\n",
      "\n",
      "RUN pip install --upgrade pip\n",
      "RUN pip install --no-cache-dir --user -r /tmp/requirements.txt\n",
      "\n",
      "\n",
      "# Install MONAI Deploy App SDK\n",
      "\n",
      "# Install MONAI Deploy from PyPI org\n",
      "RUN pip install monai-deploy-app-sdk==3.0.0\n",
      "\n",
      "\n",
      "COPY ./models  /opt/holoscan/models\n",
      "\n",
      "\n",
      "COPY ./map/app.json /etc/holoscan/app.json\n",
      "COPY ./app.config /var/holoscan/app.yaml\n",
      "COPY ./map/pkg.json /etc/holoscan/pkg.json\n",
      "\n",
      "COPY ./app /opt/holoscan/app\n",
      "\n",
      "\n",
      "ENTRYPOINT [\"/var/holoscan/tools\"]\n",
      "=========== End Dockerfile ===========\n",
      "\n",
      "[2025-04-22 12:15:36,294] [INFO] (packager.builder) - \n",
      "===============================================================================\n",
      "Building image for:                 x64-workstation\n",
      "    Architecture:                   linux/amd64\n",
      "    Base Image:                     nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "    Build Image:                    N/A\n",
      "    Cache:                          Enabled\n",
      "    Configuration:                  dgpu\n",
      "    Holoscan SDK Package:           3.1.0\n",
      "    MONAI Deploy App SDK Package:   N/A\n",
      "    gRPC Health Probe:              N/A\n",
      "    SDK Version:                    3.1.0\n",
      "    SDK:                            monai-deploy\n",
      "    Tag:                            my_app-x64-workstation-dgpu-linux-amd64:1.0\n",
      "    Included features/dependencies: N/A\n",
      "    \n",
      "[2025-04-22 12:15:36,708] [INFO] (common) - Using existing Docker BuildKit builder `holoscan_app_builder`\n",
      "[2025-04-22 12:15:36,708] [DEBUG] (packager.builder) - Building Holoscan Application Package: tag=my_app-x64-workstation-dgpu-linux-amd64:1.0\n",
      "#0 building with \"holoscan_app_builder\" instance using docker-container driver\n",
      "\n",
      "#1 [internal] load build definition from Dockerfile\n",
      "#1 transferring dockerfile: 4.55kB done\n",
      "#1 DONE 0.1s\n",
      "\n",
      "#2 [auth] nvidia/cuda:pull token for nvcr.io\n",
      "#2 DONE 0.0s\n",
      "\n",
      "#3 [internal] load metadata for nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#3 DONE 0.5s\n",
      "\n",
      "#4 [internal] load .dockerignore\n",
      "#4 transferring context: 1.80kB done\n",
      "#4 DONE 0.1s\n",
      "\n",
      "#5 importing cache manifest from nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#5 ...\n",
      "\n",
      "#6 [internal] load build context\n",
      "#6 DONE 0.0s\n",
      "\n",
      "#7 importing cache manifest from local:2851983977013277839\n",
      "#7 inferred cache manifest type: application/vnd.oci.image.index.v1+json done\n",
      "#7 DONE 0.0s\n",
      "\n",
      "#8 [base 1/2] FROM nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04@sha256:22fc009e5cea0b8b91d94c99fdd419d2366810b5ea835e47b8343bc15800c186\n",
      "#8 resolve nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04@sha256:22fc009e5cea0b8b91d94c99fdd419d2366810b5ea835e47b8343bc15800c186 0.0s done\n",
      "#8 DONE 0.1s\n",
      "\n",
      "#5 importing cache manifest from nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#5 inferred cache manifest type: application/vnd.docker.distribution.manifest.list.v2+json done\n",
      "#5 DONE 0.7s\n",
      "\n",
      "#6 [internal] load build context\n",
      "#6 transferring context: 635.92MB 3.7s done\n",
      "#6 DONE 3.7s\n",
      "\n",
      "#9 [release  7/18] COPY ./tools /var/holoscan/tools\n",
      "#9 CACHED\n",
      "\n",
      "#10 [base 2/2] RUN apt-get update     && apt-get install -y --no-install-recommends --no-install-suggests         curl         jq     && rm -rf /var/lib/apt/lists/*\n",
      "#10 CACHED\n",
      "\n",
      "#11 [release  8/18] RUN chmod +x /var/holoscan/tools\n",
      "#11 CACHED\n",
      "\n",
      "#12 [release  3/18] RUN groupadd -f -g 1000 holoscan\n",
      "#12 CACHED\n",
      "\n",
      "#13 [release  4/18] RUN useradd -rm -d /home/holoscan -s /bin/bash -g 1000 -G sudo -u 1000 holoscan\n",
      "#13 CACHED\n",
      "\n",
      "#14 [release  6/18] WORKDIR /var/holoscan\n",
      "#14 CACHED\n",
      "\n",
      "#15 [release  2/18] RUN apt update     && apt-get install -y --no-install-recommends --no-install-suggests         python3-minimal=3.10.6-1~22.04         libpython3-stdlib=3.10.6-1~22.04         python3=3.10.6-1~22.04         python3-venv=3.10.6-1~22.04         python3-pip=22.0.2+dfsg-*     && rm -rf /var/lib/apt/lists/*\n",
      "#15 CACHED\n",
      "\n",
      "#16 [release  1/18] RUN mkdir -p /etc/holoscan/         && mkdir -p /opt/holoscan/         && mkdir -p /var/holoscan         && mkdir -p /opt/holoscan/app         && mkdir -p /var/holoscan/input         && mkdir -p /var/holoscan/output\n",
      "#16 CACHED\n",
      "\n",
      "#17 [release  5/18] RUN chown -R holoscan /var/holoscan &&     chown -R holoscan /var/holoscan/input &&     chown -R holoscan /var/holoscan/output\n",
      "#17 CACHED\n",
      "\n",
      "#18 [release  9/18] WORKDIR /var/holoscan\n",
      "#18 CACHED\n",
      "\n",
      "#19 [release 10/18] COPY ./pip/requirements.txt /tmp/requirements.txt\n",
      "#19 DONE 4.0s\n",
      "\n",
      "#20 [release 11/18] RUN pip install --upgrade pip\n",
      "#20 0.851 Defaulting to user installation because normal site-packages is not writeable\n",
      "#20 0.897 Requirement already satisfied: pip in /usr/lib/python3/dist-packages (22.0.2)\n",
      "#20 1.075 Collecting pip\n",
      "#20 1.173   Downloading pip-25.0.1-py3-none-any.whl (1.8 MB)\n",
      "#20 1.340      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 11.4 MB/s eta 0:00:00\n",
      "#20 1.372 Installing collected packages: pip\n",
      "#20 2.121 Successfully installed pip-25.0.1\n",
      "#20 DONE 2.3s\n",
      "\n",
      "#21 [release 12/18] RUN pip install --no-cache-dir --user -r /tmp/requirements.txt\n",
      "#21 0.675 Collecting highdicom>=0.18.2 (from -r /tmp/requirements.txt (line 1))\n",
      "#21 0.728   Downloading highdicom-0.25.1-py3-none-any.whl.metadata (5.0 kB)\n",
      "#21 0.822 Collecting monai>=1.0 (from -r /tmp/requirements.txt (line 2))\n",
      "#21 0.835   Downloading monai-1.4.0-py3-none-any.whl.metadata (11 kB)\n",
      "#21 0.931 Collecting nibabel>=3.2.1 (from -r /tmp/requirements.txt (line 3))\n",
      "#21 0.961   Downloading nibabel-5.3.2-py3-none-any.whl.metadata (9.1 kB)\n",
      "#21 1.149 Collecting numpy>=1.21.6 (from -r /tmp/requirements.txt (line 4))\n",
      "#21 1.161   Downloading numpy-2.2.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)\n",
      "#21 1.211 Collecting pydicom>=2.3.0 (from -r /tmp/requirements.txt (line 5))\n",
      "#21 1.224   Downloading pydicom-3.0.1-py3-none-any.whl.metadata (9.4 kB)\n",
      "#21 1.233 Requirement already satisfied: setuptools>=59.5.0 in /usr/lib/python3/dist-packages (from -r /tmp/requirements.txt (line 6)) (59.6.0)\n",
      "#21 1.259 Collecting SimpleITK>=2.0.0 (from -r /tmp/requirements.txt (line 7))\n",
      "#21 1.272   Downloading SimpleITK-2.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.9 kB)\n",
      "#21 1.310 Collecting torch>=1.12.0 (from -r /tmp/requirements.txt (line 8))\n",
      "#21 1.323   Downloading torch-2.6.0-cp310-cp310-manylinux1_x86_64.whl.metadata (28 kB)\n",
      "#21 1.489 Collecting pillow>=8.3 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.500   Downloading pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (8.9 kB)\n",
      "#21 1.605 Collecting pyjpegls>=1.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.619   Downloading pyjpegls-1.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 1.641 Collecting typing-extensions>=4.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.652   Downloading typing_extensions-4.13.2-py3-none-any.whl.metadata (3.0 kB)\n",
      "#21 1.670 Collecting numpy>=1.21.6 (from -r /tmp/requirements.txt (line 4))\n",
      "#21 1.681   Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)\n",
      "#21 1.746 Collecting importlib-resources>=5.12 (from nibabel>=3.2.1->-r /tmp/requirements.txt (line 3))\n",
      "#21 1.759   Downloading importlib_resources-6.5.2-py3-none-any.whl.metadata (3.9 kB)\n",
      "#21 1.817 Collecting packaging>=20 (from nibabel>=3.2.1->-r /tmp/requirements.txt (line 3))\n",
      "#21 1.828   Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)\n",
      "#21 1.857 Collecting filelock (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.869   Downloading filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)\n",
      "#21 1.897 Collecting networkx (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.909   Downloading networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)\n",
      "#21 1.929 Collecting jinja2 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.940   Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)\n",
      "#21 1.966 Collecting fsspec (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.979   Downloading fsspec-2025.3.2-py3-none-any.whl.metadata (11 kB)\n",
      "#21 2.031 Collecting nvidia-cuda-nvrtc-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.044   Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.060 Collecting nvidia-cuda-runtime-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.073   Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.097 Collecting nvidia-cuda-cupti-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.111   Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.126 Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.140   Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.160 Collecting nvidia-cublas-cu12==12.4.5.8 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.212   Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.232 Collecting nvidia-cufft-cu12==11.2.1.3 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.245   Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.268 Collecting nvidia-curand-cu12==10.3.5.147 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.282   Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.298 Collecting nvidia-cusolver-cu12==11.6.1.9 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.312   Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.331 Collecting nvidia-cusparse-cu12==12.3.1.170 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.344   Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.359 Collecting nvidia-cusparselt-cu12==0.6.2 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.373   Downloading nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)\n",
      "#21 2.387 Collecting nvidia-nccl-cu12==2.21.5 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.399   Downloading nvidia_nccl_cu12-2.21.5-py3-none-manylinux2014_x86_64.whl.metadata (1.8 kB)\n",
      "#21 2.416 Collecting nvidia-nvtx-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.429   Downloading nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.7 kB)\n",
      "#21 2.445 Collecting nvidia-nvjitlink-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.457   Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.483 Collecting triton==3.2.0 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.497   Downloading triton-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.4 kB)\n",
      "#21 2.525 Collecting sympy==1.13.1 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.537   Downloading sympy-1.13.1-py3-none-any.whl.metadata (12 kB)\n",
      "#21 2.566 Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.577   Downloading mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)\n",
      "#21 2.597 INFO: pip is looking at multiple versions of pyjpegls to determine which version is compatible with other requirements. This could take a while.\n",
      "#21 2.597 Collecting pyjpegls>=1.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 2.609   Downloading pyjpegls-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 2.622   Downloading pyjpegls-1.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 2.679 Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.690   Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)\n",
      "#21 2.717 Downloading highdicom-0.25.1-py3-none-any.whl (1.1 MB)\n",
      "#21 3.009    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 4.1 MB/s eta 0:00:00\n",
      "#21 3.027 Downloading monai-1.4.0-py3-none-any.whl (1.5 MB)\n",
      "#21 3.342    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 5.3 MB/s eta 0:00:00\n",
      "#21 3.356 Downloading nibabel-5.3.2-py3-none-any.whl (3.3 MB)\n",
      "#21 3.851    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 6.7 MB/s eta 0:00:00\n",
      "#21 3.867 Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)\n",
      "#21 5.943    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 8.8 MB/s eta 0:00:00\n",
      "#21 5.957 Downloading pydicom-3.0.1-py3-none-any.whl (2.4 MB)\n",
      "#21 6.217    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 9.5 MB/s eta 0:00:00\n",
      "#21 6.232 Downloading SimpleITK-2.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (52.4 MB)\n",
      "#21 16.76    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.4/52.4 MB 5.0 MB/s eta 0:00:00\n",
      "#21 16.77 Downloading torch-2.6.0-cp310-cp310-manylinux1_x86_64.whl (766.7 MB)\n",
      "#21 30.05    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 766.7/766.7 MB 106.7 MB/s eta 0:00:00\n",
      "#21 30.07 Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl (363.4 MB)\n",
      "#21 33.33    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 363.4/363.4 MB 109.6 MB/s eta 0:00:00\n",
      "#21 33.34 Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (13.8 MB)\n",
      "#21 33.47    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.8/13.8 MB 113.6 MB/s eta 0:00:00\n",
      "#21 33.48 Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (24.6 MB)\n",
      "#21 33.70    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 113.8 MB/s eta 0:00:00\n",
      "#21 33.71 Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (883 kB)\n",
      "#21 33.72    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 883.7/883.7 kB 194.7 MB/s eta 0:00:00\n",
      "#21 33.74 Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl (664.8 MB)\n",
      "#21 43.59    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 664.8/664.8 MB 77.9 MB/s eta 0:00:00\n",
      "#21 43.61 Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl (211.5 MB)\n",
      "#21 45.79    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 211.5/211.5 MB 96.9 MB/s eta 0:00:00\n",
      "#21 45.81 Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl (56.3 MB)\n",
      "#21 46.32    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 MB 109.8 MB/s eta 0:00:00\n",
      "#21 46.34 Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl (127.9 MB)\n",
      "#21 47.45    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 127.9/127.9 MB 116.1 MB/s eta 0:00:00\n",
      "#21 47.46 Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl (207.5 MB)\n",
      "#21 49.33    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.5/207.5 MB 111.0 MB/s eta 0:00:00\n",
      "#21 49.35 Downloading nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl (150.1 MB)\n",
      "#21 50.69    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 150.1/150.1 MB 112.2 MB/s eta 0:00:00\n",
      "#21 50.70 Downloading nvidia_nccl_cu12-2.21.5-py3-none-manylinux2014_x86_64.whl (188.7 MB)\n",
      "#21 52.32    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 188.7/188.7 MB 117.0 MB/s eta 0:00:00\n",
      "#21 52.34 Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)\n",
      "#21 52.52    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.1/21.1 MB 117.9 MB/s eta 0:00:00\n",
      "#21 52.53 Downloading nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (99 kB)\n",
      "#21 52.54 Downloading sympy-1.13.1-py3-none-any.whl (6.2 MB)\n",
      "#21 52.60    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 122.9 MB/s eta 0:00:00\n",
      "#21 52.62 Downloading triton-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (253.1 MB)\n",
      "#21 55.13    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 253.1/253.1 MB 101.2 MB/s eta 0:00:00\n",
      "#21 55.14 Downloading importlib_resources-6.5.2-py3-none-any.whl (37 kB)\n",
      "#21 55.15 Downloading packaging-25.0-py3-none-any.whl (66 kB)\n",
      "#21 55.17 Downloading pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl (4.6 MB)\n",
      "#21 55.21    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 113.8 MB/s eta 0:00:00\n",
      "#21 55.30 Downloading pyjpegls-1.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.7 MB)\n",
      "#21 55.32    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 112.6 MB/s eta 0:00:00\n",
      "#21 55.34 Downloading typing_extensions-4.13.2-py3-none-any.whl (45 kB)\n",
      "#21 55.35 Downloading filelock-3.18.0-py3-none-any.whl (16 kB)\n",
      "#21 55.37 Downloading fsspec-2025.3.2-py3-none-any.whl (194 kB)\n",
      "#21 55.38 Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)\n",
      "#21 55.40 Downloading networkx-3.4.2-py3-none-any.whl (1.7 MB)\n",
      "#21 55.42    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 115.7 MB/s eta 0:00:00\n",
      "#21 55.43 Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)\n",
      "#21 55.45 Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)\n",
      "#21 55.46    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 183.2 MB/s eta 0:00:00\n",
      "#21 63.66 Installing collected packages: triton, SimpleITK, nvidia-cusparselt-cu12, mpmath, typing-extensions, sympy, pydicom, pillow, packaging, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, MarkupSafe, importlib-resources, fsspec, filelock, pyjpegls, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nibabel, jinja2, nvidia-cusolver-cu12, highdicom, torch, monai\n",
      "#21 126.4 Successfully installed MarkupSafe-3.0.2 SimpleITK-2.4.1 filelock-3.18.0 fsspec-2025.3.2 highdicom-0.25.1 importlib-resources-6.5.2 jinja2-3.1.6 monai-1.4.0 mpmath-1.3.0 networkx-3.4.2 nibabel-5.3.2 numpy-1.26.4 nvidia-cublas-cu12-12.4.5.8 nvidia-cuda-cupti-cu12-12.4.127 nvidia-cuda-nvrtc-cu12-12.4.127 nvidia-cuda-runtime-cu12-12.4.127 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.2.1.3 nvidia-curand-cu12-10.3.5.147 nvidia-cusolver-cu12-11.6.1.9 nvidia-cusparse-cu12-12.3.1.170 nvidia-cusparselt-cu12-0.6.2 nvidia-nccl-cu12-2.21.5 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.4.127 packaging-25.0 pillow-11.2.1 pydicom-3.0.1 pyjpegls-1.4.0 sympy-1.13.1 torch-2.6.0 triton-3.2.0 typing-extensions-4.13.2\n",
      "#21 DONE 127.8s\n",
      "\n",
      "#22 [release 13/18] RUN pip install monai-deploy-app-sdk==3.0.0\n",
      "#22 0.957 Defaulting to user installation because normal site-packages is not writeable\n",
      "#22 1.121 ERROR: Could not find a version that satisfies the requirement monai-deploy-app-sdk==3.0.0 (from versions: 0.1.0a2, 0.1.0rc1, 0.1.0rc2, 0.1.0rc3, 0.1.0, 0.1.1rc1, 0.1.1, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.5.1, 0.6.0, 1.0.0, 2.0.0)\n",
      "#22 1.240 ERROR: No matching distribution found for monai-deploy-app-sdk==3.0.0\n",
      "#22 ERROR: process \"/bin/sh -c pip install monai-deploy-app-sdk==3.0.0\" did not complete successfully: exit code: 1\n",
      "------\n",
      " > [release 13/18] RUN pip install monai-deploy-app-sdk==3.0.0:\n",
      "0.957 Defaulting to user installation because normal site-packages is not writeable\n",
      "1.121 ERROR: Could not find a version that satisfies the requirement monai-deploy-app-sdk==3.0.0 (from versions: 0.1.0a2, 0.1.0rc1, 0.1.0rc2, 0.1.0rc3, 0.1.0, 0.1.1rc1, 0.1.1, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.5.1, 0.6.0, 1.0.0, 2.0.0)\n",
      "1.240 ERROR: No matching distribution found for monai-deploy-app-sdk==3.0.0\n",
      "------\n",
      "Dockerfile:137\n",
      "--------------------\n",
      " 135 |     \n",
      " 136 |     # Install MONAI Deploy from PyPI org\n",
      " 137 | >>> RUN pip install monai-deploy-app-sdk==3.0.0\n",
      " 138 |     \n",
      " 139 |     \n",
      "--------------------\n",
      "ERROR: failed to solve: process \"/bin/sh -c pip install monai-deploy-app-sdk==3.0.0\" did not complete successfully: exit code: 1\n",
      "[2025-04-22 12:17:58,073] [INFO] (packager) - Build Summary:\n",
      "\n",
      "Platform: x64-workstation/dgpu\n",
      "    Status: Failure\n",
      "    Error:  Error building image: see Docker output for additional details.\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "tag_prefix = \"my_app\"\n",
    "\n",
    "!monai-deploy package my_app -m {models_folder} -c my_app/app.yaml -t {tag_prefix}:1.0 --platform x86_64 -l DEBUG"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once packaging completes successfully, we can verify that the Docker image has been created."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "my_app-x64-workstation-dgpu-linux-amd64                                       1.0                            aacceda07071   2 hours ago     9.25GB\n"
     ]
    }
   ],
   "source": [
    "!docker image ls | grep {tag_prefix}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can display and inspect the MAP manifests by running the container with the `show` command.\n",
    "We can also extract the manifests and other contents of the MAP with the `extract` command, mapping a specific container folder to one on the host (our MAP is compliant and supports these commands).\n",
    "\n",
    ":::{note}\n",
    "The host folder for the extracted content must be created by the user beforehand; if Docker created it when the container was run, the folder must be deleted and re-created.\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Display manifests and extract MAP contents to the host folder, ./export\n",
      "\n",
      "============================== app.json ==============================\n",
      "{\n",
      "  \"apiVersion\": \"1.0.0\",\n",
      "  \"command\": \"[\\\"python3\\\", \\\"/opt/holoscan/app\\\"]\",\n",
      "  \"environment\": {\n",
      "    \"HOLOSCAN_APPLICATION\": \"/opt/holoscan/app\",\n",
      "    \"HOLOSCAN_INPUT_PATH\": \"input/\",\n",
      "    \"HOLOSCAN_OUTPUT_PATH\": \"output/\",\n",
      "    \"HOLOSCAN_WORKDIR\": \"/var/holoscan\",\n",
      "    \"HOLOSCAN_MODEL_PATH\": \"/opt/holoscan/models\",\n",
      "    \"HOLOSCAN_CONFIG_PATH\": \"/var/holoscan/app.yaml\",\n",
      "    \"HOLOSCAN_APP_MANIFEST_PATH\": \"/etc/holoscan/app.json\",\n",
      "    \"HOLOSCAN_PKG_MANIFEST_PATH\": \"/etc/holoscan/pkg.json\",\n",
      "    \"HOLOSCAN_DOCS_PATH\": \"/opt/holoscan/docs\",\n",
      "    \"HOLOSCAN_LOGS_PATH\": \"/var/holoscan/logs\"\n",
      "  },\n",
      "  \"input\": {\n",
      "    \"path\": \"input/\",\n",
      "    \"formats\": null\n",
      "  },\n",
      "  \"liveness\": null,\n",
      "  \"output\": {\n",
      "    \"path\": \"output/\",\n",
      "    \"formats\": null\n",
      "  },\n",
      "  \"readiness\": null,\n",
      "  \"sdk\": \"monai-deploy\",\n",
      "  \"sdkVersion\": \"0.5.1\",\n",
      "  \"timeout\": 0,\n",
      "  \"version\": 1,\n",
      "  \"workingDirectory\": \"/var/holoscan\"\n",
      "}\n",
      "\n",
      "============================== pkg.json ==============================\n",
      "{\n",
      "  \"apiVersion\": \"1.0.0\",\n",
      "  \"applicationRoot\": \"/opt/holoscan/app\",\n",
      "  \"modelRoot\": \"/opt/holoscan/models\",\n",
      "  \"models\": {\n",
      "    \"model\": \"/opt/holoscan/models/model\"\n",
      "  },\n",
      "  \"resources\": {\n",
      "    \"cpu\": 1,\n",
      "    \"gpu\": 1,\n",
      "    \"memory\": \"1Gi\",\n",
      "    \"gpuMemory\": \"6Gi\"\n",
      "  },\n",
      "  \"version\": 1,\n",
      "  \"platformConfig\": \"dgpu\"\n",
      "}\n",
      "\n",
      "2025-04-22 19:18:00 [INFO] Copying application from /opt/holoscan/app to /var/run/holoscan/export/app\n",
      "\n",
      "2025-04-22 19:18:00 [INFO] Copying application manifest file from /etc/holoscan/app.json to /var/run/holoscan/export/config/app.json\n",
      "2025-04-22 19:18:00 [INFO] Copying pkg manifest file from /etc/holoscan/pkg.json to /var/run/holoscan/export/config/pkg.json\n",
      "2025-04-22 19:18:00 [INFO] Copying application configuration from /var/holoscan/app.yaml to /var/run/holoscan/export/config/app.yaml\n",
      "\n",
      "2025-04-22 19:18:00 [INFO] Copying models from /opt/holoscan/models to /var/run/holoscan/export/models\n",
      "\n",
      "2025-04-22 19:18:00 [INFO] Copying documentation from /opt/holoscan/docs/ to /var/run/holoscan/export/docs\n",
      "2025-04-22 19:18:00 [INFO] '/opt/holoscan/docs/' cannot be found.\n",
      "\n",
      "app  config  models\n"
     ]
    }
   ],
   "source": [
    "!echo \"Display manifests and extract MAP contents to the host folder, ./export\"\n",
    "!docker run --rm {tag_prefix}-x64-workstation-dgpu-linux-amd64:1.0 show\n",
    "!rm -rf `pwd`/export && mkdir -p `pwd`/export\n",
    "!docker run --rm -v `pwd`/export/:/var/run/holoscan/export/ {tag_prefix}-x64-workstation-dgpu-linux-amd64:1.0 extract\n",
    "!ls `pwd`/export"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Executing the Packaged App Locally\n",
    "\n",
    "The packaged app can be run locally using the [MONAI Application Runner](/developing_with_sdk/executing_packaged_app_locally)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-04-22 12:18:02,444] [INFO] (runner) - Checking dependencies...\n",
      "[2025-04-22 12:18:02,444] [INFO] (runner) - --> Verifying if \"docker\" is installed...\n",
      "\n",
      "[2025-04-22 12:18:02,445] [INFO] (runner) - --> Verifying if \"docker-buildx\" is installed...\n",
      "\n",
      "[2025-04-22 12:18:02,445] [INFO] (runner) - --> Verifying if \"my_app-x64-workstation-dgpu-linux-amd64:1.0\" is available...\n",
      "\n",
      "[2025-04-22 12:18:02,523] [INFO] (runner) - Reading HAP/MAP manifest...\n",
      "Successfully copied 2.56kB to /tmp/tmprw2gvfwr/app.json\n",
      "Successfully copied 2.05kB to /tmp/tmprw2gvfwr/pkg.json\n",
      "991136f12d4255c8e8f7bdbf80acfad80770e774a5441551832ddc3d52c5c4cf\n",
      "[2025-04-22 12:18:02,786] [INFO] (runner) - --> Verifying if \"nvidia-ctk\" is installed...\n",
      "\n",
      "[2025-04-22 12:18:02,787] [INFO] (runner) - --> Verifying \"nvidia-ctk\" version...\n",
      "\n",
      "[2025-04-22 12:18:03,056] [INFO] (common) - Launching container (4ba4a525283c) using image 'my_app-x64-workstation-dgpu-linux-amd64:1.0'...\n",
      "    container name:      zealous_mclaren\n",
      "    host name:           mingq-dt\n",
      "    network:             host\n",
      "    user:                1000:1000\n",
      "    ulimits:             memlock=-1:-1, stack=67108864:67108864\n",
      "    cap_add:             CAP_SYS_PTRACE\n",
      "    ipc mode:            host\n",
      "    shared memory size:  67108864\n",
      "    devices:             \n",
      "    group_add:           44\n",
      "2025-04-22 19:18:03 [INFO] Launching application python3 /opt/holoscan/app ...\n",
      "\n",
      "[info] [fragment.cpp:705] Loading extensions from configs...\n",
      "\n",
      "[info] [gxf_executor.cpp:265] Creating context\n",
      "\n",
      "[2025-04-22 19:18:11,324] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=['/opt/holoscan/app'])\n",
      "\n",
      "[2025-04-22 19:18:11,326] [INFO] (root) - AppContext object: AppContext(input_path=/var/holoscan/input, output_path=/var/holoscan/output, model_path=/opt/holoscan/models, workdir=/var/holoscan), triton_server_netloc=\n",
      "\n",
      "[2025-04-22 19:18:11,329] [INFO] (root) - End compose\n",
      "\n",
      "[info] [gxf_executor.cpp:2396] Activating Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2426] Running Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "\n",
      "[info] [greedy_scheduler.cpp:191] Scheduling 6 entities\n",
      "\n",
      "[2025-04-22 19:18:11,356] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "\n",
      "[2025-04-22 19:18:12,402] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "\n",
      "  # of series: 1\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) -     Series attribute Modality value: CT\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series Selection finalized.\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "\n",
      "[2025-04-22 19:18:12,403] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 19:18:12,611] [INFO] (root) - Casting to float32\n",
      "\n",
      "[2025-04-22 19:18:12,667] [INFO] (root) - Parsing from bundle_path: /opt/holoscan/models/model/model.ts\n",
      "\n",
      "/home/holoscan/.local/lib/python3.10/site-packages/monai/bundle/reference_resolver.py:216: UserWarning: Detected deprecated name 'optional_packages_version' in configuration file, replacing with 'required_packages_version'.\n",
      "\n",
      "  warnings.warn(\n",
      "\n",
      "[2025-04-22 19:18:16,253] [INFO] (monai.deploy.operators.stl_conversion_operator.STLConversionOperator) - Output will be saved in file /var/holoscan/output/stl/spleen.stl.\n",
      "\n",
      "[2025-04-22 19:18:17,650] [INFO] (monai.deploy.operators.stl_conversion_operator.SpatialImage) - 3D image\n",
      "\n",
      "[2025-04-22 19:18:17,650] [INFO] (monai.deploy.operators.stl_conversion_operator.STLConverter) - Image ndarray shape:(204, 512, 512)\n",
      "\n",
      "/home/holoscan/.local/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "\n",
      "  check_person_name(patient_name)\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 19:18:28,324] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "\n",
      "[2025-04-22 19:18:28,325] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "\n",
      "[2025-04-22 19:18:28,325] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "\n",
      "[info] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "\n",
      "[info] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "\n",
      "[info] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "\n",
      "[2025-04-22 19:18:28,421] [INFO] (app.AISpleenSegApp) - End run\n",
      "\n",
      "[2025-04-22 12:18:29,792] [INFO] (common) - Container 'zealous_mclaren'(4ba4a525283c) exited.\n"
     ]
    }
   ],
   "source": [
    "# Clear the output folder, then run the MAP. The input path is expected to be a folder of DICOM instances.\n",
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "!monai-deploy run -i $HOLOSCAN_INPUT_PATH -o $HOLOSCAN_OUTPUT_PATH my_app-x64-workstation-dgpu-linux-amd64:1.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output folder contains the generated DICOM segmentation files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.2.826.0.1.3680043.10.511.3.11413742162001654228707576103547421.dcm  stl\n"
     ]
    }
   ],
   "source": [
    "!ls $HOLOSCAN_OUTPUT_PATH"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
