{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Creating a Segmentation App with MONAI Deploy App SDK\n",
    "\n",
     "This tutorial shows how to create an organ segmentation application for a PyTorch model that has been trained with MONAI. Note that this example does not require the model to be packaged as a MONAI Bundle.\n",
    "\n",
     "Deploying AI models requires integration with a clinical imaging network, even in a research-use-only setting. This means that the deploy application needs to support standards-based imaging protocols; for radiological imaging specifically, the DICOM protocol.\n",
    "\n",
     "Typically, DICOM network communication, whether over the DICOM TCP/IP network protocol or DICOMweb, is handled by DICOM devices or services, e.g. the MONAI Deploy Informatics Gateway, so the deploy application itself only needs to use DICOM Part 10 files as input and save the AI result in DICOM Part 10 file(s). For segmentation use cases, the DICOM instance file for AI results could be a DICOM Segmentation object or a DICOM RT Structure Set; for classification, a DICOM Structured Report and/or a DICOM Encapsulated PDF.\n",
    "\n",
     "During model training, input and label images are typically in a non-DICOM volumetric image format, e.g., NIfTI or PNG, converted from a specific DICOM study series, and the voxel spacings have most likely been re-sampled to be uniform for all images. When integrated with imaging networks and receiving DICOM instances from modalities and Picture Archiving and Communication Systems (PACS), an AI deploy application has to deal with a whole DICOM study with multiple series, whose image spacings may not be the same as the trained model expects. To address these cases consistently and efficiently, the MONAI Deploy App SDK provides classes, called operators, to parse DICOM studies, select specific series with application-defined rules, and convert the selected DICOM series into a domain-specific image format along with metadata representing the pertinent DICOM attributes. The image is then further processed in the pre-processing stage to normalize spacing, orientation, intensity, etc., before the pixel data, as tensors, are used for inference.\n",
    "\n",
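     "To see why uniform spacing matters, here is a toy, pure-Python sketch (not SDK code) of nearest-neighbor re-sampling along one axis; in the application, the `Spacingd` transform performs the full 3D equivalent:\n",
     "\n",
     "```python\n",
     "# Toy illustration only: re-sample a 1-D signal from 0.75 mm to 1.5 mm spacing.\n",
     "def resample_nearest(values, src_spacing, dst_spacing):\n",
     "    n_out = int(round(len(values) * src_spacing / dst_spacing))\n",
     "    return [values[min(int(round(i * dst_spacing / src_spacing)), len(values) - 1)]\n",
     "            for i in range(n_out)]\n",
     "\n",
     "signal = list(range(10))                    # 10 samples at 0.75 mm spacing\n",
     "print(resample_nearest(signal, 0.75, 1.5))  # half as many samples, same extent\n",
     "```\n",
     "\n",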
    "In the following sections, we will demonstrate how to create a MONAI Deploy application package using the MONAI Deploy App SDK.\n",
    "\n",
    ":::{note}\n",
     "For local testing, if DICOM Part 10 files are lacking, one can use open-source programs, e.g. 3D Slicer, to convert NIfTI files to DICOM files.\n",
    "\n",
    ":::\n",
    "\n",
    "## Creating Operators and connecting them in Application class\n",
    "\n",
    "We will implement an application that consists of five Operators:\n",
    "\n",
    "- **DICOMDataLoaderOperator**:\n",
    "    - **Input(dicom_files)**: a folder path (`Path`)\n",
    "    - **Output(dicom_study_list)**: a list of DICOM studies in memory (List[[`DICOMStudy`](/modules/_autosummary/monai.deploy.core.domain.DICOMStudy)])\n",
    "- **DICOMSeriesSelectorOperator**:\n",
    "    - **Input(dicom_study_list)**: a list of DICOM studies in memory (List[[`DICOMStudy`](/modules/_autosummary/monai.deploy.core.domain.DICOMStudy)])\n",
    "    - **Input(selection_rules)**: a selection rule (Dict)\n",
     "    - **Output(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "- **DICOMSeriesToVolumeOperator**:\n",
     "    - **Input(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "    - **Output(image)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "- **SpleenSegOperator**:\n",
    "    - **Input(image)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "    - **Output(seg_image)**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
    "- **DICOMSegmentationWriterOperator**:\n",
    "    - **Input(seg_image)**: a segmentation image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))\n",
     "    - **Input(study_selected_series_list)**: a list of selected DICOM series in memory (List[[`StudySelectedSeries`](/modules/_autosummary/monai.deploy.core.domain.StudySelectedSeries)])\n",
    "    - **Output(dicom_seg_instance)**: a file path (`Path`)\n",
    "\n",
    "\n",
    ":::{note}\n",
     "The `DICOMSegmentationWriterOperator` needs both the segmentation image and the original DICOM series metadata in order to use the patient demographics and the DICOM Study level attributes.\n",
    ":::\n",
    "\n",
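     "The selection rules are given to the `DICOMSeriesSelectorOperator` as JSON text (see `Sample_Rules_Text` later in this notebook). As a schematic sketch only, not the SDK's actual implementation, regex-valued conditions can be matched against series attributes like this:\n",
     "\n",
     "```python\n",
     "import re\n",
     "\n",
     "# Schematic only: each condition key names a DICOM attribute, and its value is a\n",
     "# regular expression that must match that attribute for the series to be selected.\n",
     "rules = {'selections': [{'name': 'CT Series', 'conditions': {'Modality': '(?i)CT'}}]}\n",
     "\n",
     "def matches(series_attrs, conditions):\n",
     "    return all(re.search(pattern, str(series_attrs.get(key, ''))) is not None\n",
     "               for key, pattern in conditions.items())\n",
     "\n",
     "series = {'Modality': 'CT', 'SeriesDescription': 'ABD/PANC 3.0 B31f'}\n",
     "selected = [s['name'] for s in rules['selections'] if matches(series, s['conditions'])]\n",
     "print(selected)  # ['CT Series']\n",
     "```\n",
     "\n",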
    "The workflow of the application would look like this.\n",
    "\n",
    "```{mermaid}\n",
    "%%{init: {\"theme\": \"base\", \"themeVariables\": { \"fontSize\": \"16px\"}} }%%\n",
    "\n",
    "classDiagram\n",
    "    direction TB\n",
    "\n",
    "    DICOMDataLoaderOperator --|> DICOMSeriesSelectorOperator : dicom_study_list...dicom_study_list\n",
    "    DICOMSeriesSelectorOperator --|> DICOMSeriesToVolumeOperator : study_selected_series_list...study_selected_series_list\n",
    "    DICOMSeriesToVolumeOperator --|> SpleenSegOperator : image...image\n",
    "    DICOMSeriesSelectorOperator --|> DICOMSegmentationWriterOperator : study_selected_series_list...study_selected_series_list\n",
    "    SpleenSegOperator --|> DICOMSegmentationWriterOperator : seg_image...seg_image\n",
    "\n",
    "\n",
    "    class DICOMDataLoaderOperator {\n",
    "        <in>dicom_files : DISK\n",
    "        dicom_study_list(out) IN_MEMORY\n",
    "    }\n",
    "    class DICOMSeriesSelectorOperator {\n",
    "        <in>dicom_study_list : IN_MEMORY\n",
    "        <in>selection_rules : IN_MEMORY\n",
    "        study_selected_series_list(out) IN_MEMORY\n",
    "    }\n",
    "    class DICOMSeriesToVolumeOperator {\n",
    "        <in>study_selected_series_list : IN_MEMORY\n",
    "        image(out) IN_MEMORY\n",
    "    }\n",
    "    class SpleenSegOperator {\n",
    "        <in>image : IN_MEMORY\n",
    "        seg_image(out) IN_MEMORY\n",
    "    }\n",
    "    class DICOMSegmentationWriterOperator {\n",
    "        <in>seg_image : IN_MEMORY\n",
    "        <in>study_selected_series_list : IN_MEMORY\n",
    "        dicom_seg_instance(out) DISK\n",
    "    }\n",
    "```\n",
    "\n",
    "### Setup environment\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install MONAI and other necessary image processing packages for the application\n",
    "!python -c \"import monai\" || pip install --upgrade -q \"monai\"\n",
    "!python -c \"import torch\" || pip install -q \"torch>=1.10.2\"\n",
    "!python -c \"import numpy\" || pip install -q \"numpy>=1.21\"\n",
    "!python -c \"import nibabel\" || pip install -q \"nibabel>=3.2.1\"\n",
    "!python -c \"import pydicom\" || pip install -q \"pydicom>=1.4.2\"\n",
    "!python -c \"import highdicom\" || pip install -q \"highdicom>=0.18.2\"\n",
    "!python -c \"import SimpleITK\" || pip install -q \"SimpleITK>=2.0.0\"\n",
    "\n",
    "# Install MONAI Deploy App SDK package\n",
    "!python -c \"import monai.deploy\" || pip install --upgrade \"monai-deploy-app-sdk\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: you may need to restart the Jupyter kernel to use the updated packages."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download/Extract ai_spleen_bundle_data from Google Drive\n",
    "\n",
     "**_Note:_** Data files are now access-controlled. Please first request permission to access the [shared folder on Google Drive](https://drive.google.com/drive/folders/1EONJsrwbGsS30td0hs8zl4WKjihew1Z3?usp=sharing). Then download the zip file, `ai_spleen_seg_bundle_data.zip`, from the `ai_spleen_seg_app` folder to the same folder as this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Archive:  ai_spleen_seg_bundle_data.zip\n",
      "  inflating: dcm/1-001.dcm           \n",
      "  inflating: dcm/1-002.dcm           \n",
      "  inflating: dcm/1-003.dcm           \n",
      "  inflating: dcm/1-004.dcm           \n",
      "  inflating: dcm/1-005.dcm           \n",
      "  inflating: dcm/1-006.dcm           \n",
      "  inflating: dcm/1-007.dcm           \n",
      "  inflating: dcm/1-008.dcm           \n",
      "  inflating: dcm/1-009.dcm           \n",
      "  inflating: dcm/1-010.dcm           \n",
      "  inflating: dcm/1-011.dcm           \n",
      "  inflating: dcm/1-012.dcm           \n",
      "  inflating: dcm/1-013.dcm           \n",
      "  inflating: dcm/1-014.dcm           \n",
      "  inflating: dcm/1-015.dcm           \n",
      "  inflating: dcm/1-016.dcm           \n",
      "  inflating: dcm/1-017.dcm           \n",
      "  inflating: dcm/1-018.dcm           \n",
      "  inflating: dcm/1-019.dcm           \n",
      "  inflating: dcm/1-020.dcm           \n",
      "  inflating: dcm/1-021.dcm           \n",
      "  inflating: dcm/1-022.dcm           \n",
      "  inflating: dcm/1-023.dcm           \n",
      "  inflating: dcm/1-024.dcm           \n",
      "  inflating: dcm/1-025.dcm           \n",
      "  inflating: dcm/1-026.dcm           \n",
      "  inflating: dcm/1-027.dcm           \n",
      "  inflating: dcm/1-028.dcm           \n",
      "  inflating: dcm/1-029.dcm           \n",
      "  inflating: dcm/1-030.dcm           \n",
      "  inflating: dcm/1-031.dcm           \n",
      "  inflating: dcm/1-032.dcm           \n",
      "  inflating: dcm/1-033.dcm           \n",
      "  inflating: dcm/1-034.dcm           \n",
      "  inflating: dcm/1-035.dcm           \n",
      "  inflating: dcm/1-036.dcm           \n",
      "  inflating: dcm/1-037.dcm           \n",
      "  inflating: dcm/1-038.dcm           \n",
      "  inflating: dcm/1-039.dcm           \n",
      "  inflating: dcm/1-040.dcm           \n",
      "  inflating: dcm/1-041.dcm           \n",
      "  inflating: dcm/1-042.dcm           \n",
      "  inflating: dcm/1-043.dcm           \n",
      "  inflating: dcm/1-044.dcm           \n",
      "  inflating: dcm/1-045.dcm           \n",
      "  inflating: dcm/1-046.dcm           \n",
      "  inflating: dcm/1-047.dcm           \n",
      "  inflating: dcm/1-048.dcm           \n",
      "  inflating: dcm/1-049.dcm           \n",
      "  inflating: dcm/1-050.dcm           \n",
      "  inflating: dcm/1-051.dcm           \n",
      "  inflating: dcm/1-052.dcm           \n",
      "  inflating: dcm/1-053.dcm           \n",
      "  inflating: dcm/1-054.dcm           \n",
      "  inflating: dcm/1-055.dcm           \n",
      "  inflating: dcm/1-056.dcm           \n",
      "  inflating: dcm/1-057.dcm           \n",
      "  inflating: dcm/1-058.dcm           \n",
      "  inflating: dcm/1-059.dcm           \n",
      "  inflating: dcm/1-060.dcm           \n",
      "  inflating: dcm/1-061.dcm           \n",
      "  inflating: dcm/1-062.dcm           \n",
      "  inflating: dcm/1-063.dcm           \n",
      "  inflating: dcm/1-064.dcm           \n",
      "  inflating: dcm/1-065.dcm           \n",
      "  inflating: dcm/1-066.dcm           \n",
      "  inflating: dcm/1-067.dcm           \n",
      "  inflating: dcm/1-068.dcm           \n",
      "  inflating: dcm/1-069.dcm           \n",
      "  inflating: dcm/1-070.dcm           \n",
      "  inflating: dcm/1-071.dcm           \n",
      "  inflating: dcm/1-072.dcm           \n",
      "  inflating: dcm/1-073.dcm           \n",
      "  inflating: dcm/1-074.dcm           \n",
      "  inflating: dcm/1-075.dcm           \n",
      "  inflating: dcm/1-076.dcm           \n",
      "  inflating: dcm/1-077.dcm           \n",
      "  inflating: dcm/1-078.dcm           \n",
      "  inflating: dcm/1-079.dcm           \n",
      "  inflating: dcm/1-080.dcm           \n",
      "  inflating: dcm/1-081.dcm           \n",
      "  inflating: dcm/1-082.dcm           \n",
      "  inflating: dcm/1-083.dcm           \n",
      "  inflating: dcm/1-084.dcm           \n",
      "  inflating: dcm/1-085.dcm           \n",
      "  inflating: dcm/1-086.dcm           \n",
      "  inflating: dcm/1-087.dcm           \n",
      "  inflating: dcm/1-088.dcm           \n",
      "  inflating: dcm/1-089.dcm           \n",
      "  inflating: dcm/1-090.dcm           \n",
      "  inflating: dcm/1-091.dcm           \n",
      "  inflating: dcm/1-092.dcm           \n",
      "  inflating: dcm/1-093.dcm           \n",
      "  inflating: dcm/1-094.dcm           \n",
      "  inflating: dcm/1-095.dcm           \n",
      "  inflating: dcm/1-096.dcm           \n",
      "  inflating: dcm/1-097.dcm           \n",
      "  inflating: dcm/1-098.dcm           \n",
      "  inflating: dcm/1-099.dcm           \n",
      "  inflating: dcm/1-100.dcm           \n",
      "  inflating: dcm/1-101.dcm           \n",
      "  inflating: dcm/1-102.dcm           \n",
      "  inflating: dcm/1-103.dcm           \n",
      "  inflating: dcm/1-104.dcm           \n",
      "  inflating: dcm/1-105.dcm           \n",
      "  inflating: dcm/1-106.dcm           \n",
      "  inflating: dcm/1-107.dcm           \n",
      "  inflating: dcm/1-108.dcm           \n",
      "  inflating: dcm/1-109.dcm           \n",
      "  inflating: dcm/1-110.dcm           \n",
      "  inflating: dcm/1-111.dcm           \n",
      "  inflating: dcm/1-112.dcm           \n",
      "  inflating: dcm/1-113.dcm           \n",
      "  inflating: dcm/1-114.dcm           \n",
      "  inflating: dcm/1-115.dcm           \n",
      "  inflating: dcm/1-116.dcm           \n",
      "  inflating: dcm/1-117.dcm           \n",
      "  inflating: dcm/1-118.dcm           \n",
      "  inflating: dcm/1-119.dcm           \n",
      "  inflating: dcm/1-120.dcm           \n",
      "  inflating: dcm/1-121.dcm           \n",
      "  inflating: dcm/1-122.dcm           \n",
      "  inflating: dcm/1-123.dcm           \n",
      "  inflating: dcm/1-124.dcm           \n",
      "  inflating: dcm/1-125.dcm           \n",
      "  inflating: dcm/1-126.dcm           \n",
      "  inflating: dcm/1-127.dcm           \n",
      "  inflating: dcm/1-128.dcm           \n",
      "  inflating: dcm/1-129.dcm           \n",
      "  inflating: dcm/1-130.dcm           \n",
      "  inflating: dcm/1-131.dcm           \n",
      "  inflating: dcm/1-132.dcm           \n",
      "  inflating: dcm/1-133.dcm           \n",
      "  inflating: dcm/1-134.dcm           \n",
      "  inflating: dcm/1-135.dcm           \n",
      "  inflating: dcm/1-136.dcm           \n",
      "  inflating: dcm/1-137.dcm           \n",
      "  inflating: dcm/1-138.dcm           \n",
      "  inflating: dcm/1-139.dcm           \n",
      "  inflating: dcm/1-140.dcm           \n",
      "  inflating: dcm/1-141.dcm           \n",
      "  inflating: dcm/1-142.dcm           \n",
      "  inflating: dcm/1-143.dcm           \n",
      "  inflating: dcm/1-144.dcm           \n",
      "  inflating: dcm/1-145.dcm           \n",
      "  inflating: dcm/1-146.dcm           \n",
      "  inflating: dcm/1-147.dcm           \n",
      "  inflating: dcm/1-148.dcm           \n",
      "  inflating: dcm/1-149.dcm           \n",
      "  inflating: dcm/1-150.dcm           \n",
      "  inflating: dcm/1-151.dcm           \n",
      "  inflating: dcm/1-152.dcm           \n",
      "  inflating: dcm/1-153.dcm           \n",
      "  inflating: dcm/1-154.dcm           \n",
      "  inflating: dcm/1-155.dcm           \n",
      "  inflating: dcm/1-156.dcm           \n",
      "  inflating: dcm/1-157.dcm           \n",
      "  inflating: dcm/1-158.dcm           \n",
      "  inflating: dcm/1-159.dcm           \n",
      "  inflating: dcm/1-160.dcm           \n",
      "  inflating: dcm/1-161.dcm           \n",
      "  inflating: dcm/1-162.dcm           \n",
      "  inflating: dcm/1-163.dcm           \n",
      "  inflating: dcm/1-164.dcm           \n",
      "  inflating: dcm/1-165.dcm           \n",
      "  inflating: dcm/1-166.dcm           \n",
      "  inflating: dcm/1-167.dcm           \n",
      "  inflating: dcm/1-168.dcm           \n",
      "  inflating: dcm/1-169.dcm           \n",
      "  inflating: dcm/1-170.dcm           \n",
      "  inflating: dcm/1-171.dcm           \n",
      "  inflating: dcm/1-172.dcm           \n",
      "  inflating: dcm/1-173.dcm           \n",
      "  inflating: dcm/1-174.dcm           \n",
      "  inflating: dcm/1-175.dcm           \n",
      "  inflating: dcm/1-176.dcm           \n",
      "  inflating: dcm/1-177.dcm           \n",
      "  inflating: dcm/1-178.dcm           \n",
      "  inflating: dcm/1-179.dcm           \n",
      "  inflating: dcm/1-180.dcm           \n",
      "  inflating: dcm/1-181.dcm           \n",
      "  inflating: dcm/1-182.dcm           \n",
      "  inflating: dcm/1-183.dcm           \n",
      "  inflating: dcm/1-184.dcm           \n",
      "  inflating: dcm/1-185.dcm           \n",
      "  inflating: dcm/1-186.dcm           \n",
      "  inflating: dcm/1-187.dcm           \n",
      "  inflating: dcm/1-188.dcm           \n",
      "  inflating: dcm/1-189.dcm           \n",
      "  inflating: dcm/1-190.dcm           \n",
      "  inflating: dcm/1-191.dcm           \n",
      "  inflating: dcm/1-192.dcm           \n",
      "  inflating: dcm/1-193.dcm           \n",
      "  inflating: dcm/1-194.dcm           \n",
      "  inflating: dcm/1-195.dcm           \n",
      "  inflating: dcm/1-196.dcm           \n",
      "  inflating: dcm/1-197.dcm           \n",
      "  inflating: dcm/1-198.dcm           \n",
      "  inflating: dcm/1-199.dcm           \n",
      "  inflating: dcm/1-200.dcm           \n",
      "  inflating: dcm/1-201.dcm           \n",
      "  inflating: dcm/1-202.dcm           \n",
      "  inflating: dcm/1-203.dcm           \n",
      "  inflating: dcm/1-204.dcm           \n",
      "  inflating: model.ts                \n",
      "model.ts\n"
     ]
    }
   ],
   "source": [
    "# Download ai_spleen_bundle_data test data zip file. Please request access and download manually.\n",
    "# !pip install gdown\n",
    "# !gdown \"https://drive.google.com/uc?id=1IwWMpbo2fd38fKIqeIdL8SKTGvkn31tK\"\n",
    "\n",
    "# After downloading ai_spleen_bundle_data zip file from the web browser or using gdown,\n",
    "!unzip -o \"ai_spleen_seg_bundle_data.zip\"\n",
    "\n",
    "# Need to copy the model.ts file to its own clean subfolder for packaging, to work around an issue in the Packager\n",
    "models_folder = \"models\"\n",
    "!rm -rf {models_folder} && mkdir -p {models_folder}/model && cp model.ts {models_folder}/model && ls {models_folder}/model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "env: HOLOSCAN_INPUT_PATH=dcm\n",
      "env: HOLOSCAN_MODEL_PATH=models\n",
      "env: HOLOSCAN_OUTPUT_PATH=output\n"
     ]
    }
   ],
   "source": [
    "%env HOLOSCAN_INPUT_PATH dcm\n",
    "%env HOLOSCAN_MODEL_PATH {models_folder}\n",
    "%env HOLOSCAN_OUTPUT_PATH output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup imports\n",
    "\n",
    "Let's import necessary classes/decorators to define Application and Operator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "from numpy import uint8  # Needed if SaveImaged is enabled\n",
    "from pathlib import Path\n",
    "\n",
    "# Required for setting SegmentDescription attributes. Direct import as this is not part of App SDK package.\n",
    "from pydicom.sr.codedict import codes\n",
    "\n",
    "from monai.deploy.conditions import CountCondition\n",
    "from monai.deploy.core import AppContext, Application, ConditionType, Fragment, Operator, OperatorSpec\n",
    "from monai.deploy.core.domain import Image\n",
    "from monai.deploy.core.io_type import IOType\n",
    "from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator\n",
    "from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription\n",
    "from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator\n",
    "from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator\n",
    "from monai.deploy.operators.monai_seg_inference_operator import InfererType, InMemImageReader, MonaiSegInferenceOperator\n",
    "\n",
    "from monai.transforms import (\n",
    "    Activationsd,\n",
    "    AsDiscreted,\n",
    "    Compose,\n",
    "    EnsureChannelFirstd,\n",
    "    EnsureTyped,\n",
    "    Invertd,\n",
    "    LoadImaged,\n",
    "    Orientationd,\n",
    "    SaveImaged,\n",
    "    ScaleIntensityRanged,\n",
    "    Spacingd,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating Model Specific Inference Operator classes\n",
    "\n",
     "Each Operator class inherits the base `Operator` class. The input/output properties are specified by implementing the `setup()` method, and the business logic is implemented in the `compute()` method.\n",
    "\n",
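     "As a schematic, not the SDK's actual base class, the division of labor between `setup()` and `compute()` looks like this:\n",
     "\n",
     "```python\n",
     "# Schematic only: ports are declared in setup(); the logic runs in compute().\n",
     "class ToyOperator:\n",
     "    def setup(self, spec):\n",
     "        spec['inputs'].append('image')\n",
     "        spec['outputs'].append('seg_image')\n",
     "\n",
     "    def compute(self, op_input, op_output):\n",
     "        op_output['seg_image'] = [v + 1 for v in op_input['image']]  # stand-in logic\n",
     "\n",
     "spec = {'inputs': [], 'outputs': []}\n",
     "op = ToyOperator()\n",
     "op.setup(spec)\n",
     "op_output = {}\n",
     "op.compute({'image': [1, 2, 3]}, op_output)\n",
     "print(spec['inputs'], op_output['seg_image'])  # ['image'] [2, 3, 4]\n",
     "```\n",
     "\n",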
     "The App SDK provides a `MonaiSegInferenceOperator` class to perform segmentation prediction with a TorchScript model. For consistency, this class uses MONAI dictionary-based transforms, as `Compose` objects, for pre- and post-transforms. The model-specific inference operator then only needs to create the pre- and post-transform `Compose` objects based on what was used with the model during training and validation. Note that for the deploy application, `ignite` is neither needed nor supported.\n",
    "\n",
    "#### SpleenSegOperator\n",
    "\n",
     "The `SpleenSegOperator` receives as input an in-memory [Image](/modules/_autosummary/monai.deploy.core.domain.Image) object that has been converted from a DICOM CT series by the preceding `DICOMSeriesToVolumeOperator`, and produces as output an in-memory segmentation [Image](/modules/_autosummary/monai.deploy.core.domain.Image) object.\n",
    "\n",
     "The `pre_process` function creates the pre-transforms `Compose` object. For `LoadImaged`, a specialized `InMemImageReader`, derived from the MONAI `ImageReader`, is used to convert the in-memory pixel data and return the `numpy` array as well as the metadata. Also, the DICOM input pixel spacings are often not the same as the model expects, so the `Spacingd` transform must be used to re-sample the image to the expected spacing.\n",
    "\n",
     "The `post_process` function creates the post-transforms `Compose` object. The `SaveImaged` transform class is used to save the segmentation mask as a NIfTI image file; this is optional, as the in-memory mask image will be passed down to the DICOM Segmentation writer for creating a DICOM Segmentation instance. The `Invertd` transform must also be used to revert the segmentation image's orientation and spacing to match the input.\n",
    "\n",
    "When the `MonaiSegInferenceOperator` object is created, the `ROI` size is specified, as well as the transform `Compose` objects. Furthermore, the dataset image key names are set accordingly.\n",
    "\n",
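     "As a rough 1-D sketch, not MONAI's actual sliding-window inferer, an `overlap` of 0.6 with an ROI size of 96 implies a step of `96 * (1 - 0.6)` voxels between window start positions:\n",
     "\n",
     "```python\n",
     "# Toy 1-D sliding-window schedule (assumes length >= roi); MONAI does this in 3D.\n",
     "def window_starts(length, roi, overlap):\n",
     "    step = max(1, int(roi * (1 - overlap)))\n",
     "    starts = list(range(0, length - roi + 1, step))\n",
     "    if starts[-1] != length - roi:\n",
     "        starts.append(length - roi)  # final window flush with the volume end\n",
     "    return starts\n",
     "\n",
     "print(window_starts(200, 96, 0.6))  # [0, 38, 76, 104]\n",
     "```\n",
     "\n",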
    "Loading of the model and performing the prediction are encapsulated in the `MonaiSegInferenceOperator` and other SDK classes. Once the inference is completed, the segmentation [Image](/modules/_autosummary/monai.deploy.core.domain.Image) object is created and set to the output by the `SpleenSegOperator`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SpleenSegOperator(Operator):\n",
    "    \"\"\"Performs Spleen segmentation with a 3D image converted from a DICOM CT series.\n",
    "    \"\"\"\n",
    "\n",
    "    DEFAULT_OUTPUT_FOLDER = Path.cwd() / \"output/saved_images_folder\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        fragment: Fragment,\n",
    "        *args,\n",
    "        app_context: AppContext,\n",
    "        model_path: Path,\n",
    "        output_folder: Path = DEFAULT_OUTPUT_FOLDER,\n",
    "        **kwargs,\n",
    "    ):\n",
    "\n",
    "        self.logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "        self._input_dataset_key = \"image\"\n",
    "        self._pred_dataset_key = \"pred\"\n",
    "\n",
    "        self.model_path = model_path\n",
    "        self.output_folder = output_folder\n",
    "        self.output_folder.mkdir(parents=True, exist_ok=True)\n",
    "        self.app_context = app_context\n",
    "        self.input_name_image = \"image\"\n",
    "        self.output_name_seg = \"seg_image\"\n",
    "        self.output_name_saved_images_folder = \"saved_images_folder\"\n",
    "\n",
    "        # The base class has an attribute called fragment to hold the reference to the fragment object\n",
    "        super().__init__(fragment, *args, **kwargs)\n",
    "\n",
    "    def setup(self, spec: OperatorSpec):\n",
    "        spec.input(self.input_name_image)\n",
    "        spec.output(self.output_name_seg)\n",
    "        spec.output(self.output_name_saved_images_folder).condition(\n",
    "            ConditionType.NONE\n",
    "        )  # Output not requiring a receiver\n",
    "\n",
    "    def compute(self, op_input, op_output, context):\n",
    "        input_image = op_input.receive(self.input_name_image)\n",
    "        if not input_image:\n",
    "            raise ValueError(\"Input image is not found.\")\n",
    "\n",
    "        # This operator gets an in-memory Image object, so a specialized ImageReader is needed.\n",
    "        _reader = InMemImageReader(input_image)\n",
    "\n",
    "        pre_transforms = self.pre_process(_reader, str(self.output_folder))\n",
    "        post_transforms = self.post_process(pre_transforms, str(self.output_folder))\n",
    "\n",
    "        # Delegates inference and saving output to the built-in operator.\n",
    "        infer_operator = MonaiSegInferenceOperator(\n",
    "            self.fragment,\n",
    "            roi_size=(\n",
    "                96,\n",
    "                96,\n",
    "                96,\n",
    "            ),\n",
    "            pre_transforms=pre_transforms,\n",
    "            post_transforms=post_transforms,\n",
    "            overlap=0.6,\n",
    "            app_context=self.app_context,\n",
    "            model_name=\"\",\n",
    "            inferer=InfererType.SLIDING_WINDOW,\n",
    "            sw_batch_size=4,\n",
    "            model_path=self.model_path,\n",
    "            name=\"monai_seg_inference_op\",\n",
    "        )\n",
    "\n",
     "        # Set the keys used in the dictionary-based transforms; these may be changed as needed.\n",
    "        infer_operator.input_dataset_key = self._input_dataset_key\n",
    "        infer_operator.pred_dataset_key = self._pred_dataset_key\n",
    "\n",
    "        # Now emit data to the output ports of this operator\n",
    "        op_output.emit(infer_operator.compute_impl(input_image, context), self.output_name_seg)\n",
    "        op_output.emit(self.output_folder, self.output_name_saved_images_folder)\n",
    "\n",
    "    def pre_process(self, img_reader, out_dir: str = \"./input_images\") -> Compose:\n",
    "        \"\"\"Composes transforms for preprocessing input before predicting on a model.\"\"\"\n",
    "\n",
    "        Path(out_dir).mkdir(parents=True, exist_ok=True)\n",
    "        my_key = self._input_dataset_key\n",
    "\n",
    "        return Compose(\n",
    "            [\n",
    "                LoadImaged(keys=my_key, reader=img_reader),\n",
    "                EnsureChannelFirstd(keys=my_key),\n",
    "                # The SaveImaged transform can be commented out to save 5 seconds.\n",
     "                # Uncompressed NIfTI (.nii) is used, favoring speed over size; this can be changed to .nii.gz\n",
    "                SaveImaged(\n",
    "                    keys=my_key,\n",
    "                    output_dir=out_dir,\n",
    "                    output_postfix=\"\",\n",
    "                    resample=False,\n",
    "                    output_ext=\".nii\",\n",
    "                ),\n",
    "                Orientationd(keys=my_key, axcodes=\"RAS\"),\n",
    "                Spacingd(keys=my_key, pixdim=[1.5, 1.5, 2.9], mode=[\"bilinear\"]),\n",
    "                ScaleIntensityRanged(keys=my_key, a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),\n",
    "                EnsureTyped(keys=my_key),\n",
    "            ]\n",
    "        )\n",
    "\n",
    "    def post_process(self, pre_transforms: Compose, out_dir: str = \"./prediction_output\") -> Compose:\n",
    "        \"\"\"Composes transforms for postprocessing the prediction results.\"\"\"\n",
    "\n",
    "        Path(out_dir).mkdir(parents=True, exist_ok=True)\n",
    "        pred_key = self._pred_dataset_key\n",
    "\n",
    "        return Compose(\n",
    "            [\n",
    "                Activationsd(keys=pred_key, softmax=True),\n",
    "                Invertd(\n",
    "                    keys=pred_key,\n",
    "                    transform=pre_transforms,\n",
    "                    orig_keys=self._input_dataset_key,\n",
    "                    nearest_interp=False,\n",
    "                    to_tensor=True,\n",
    "                ),\n",
    "                AsDiscreted(keys=pred_key, argmax=True),\n",
    "                # The SaveImaged transform can be commented out to save 5 seconds.\n",
     "                # Uncompressed NIfTI (.nii) is used, favoring speed over size; this can be changed to .nii.gz\n",
    "                SaveImaged(\n",
    "                    keys=pred_key,\n",
    "                    output_dir=out_dir,\n",
    "                    output_postfix=\"seg\",\n",
    "                    output_dtype=uint8,\n",
    "                    resample=False,\n",
    "                    output_ext=\".nii\",\n",
    "                ),\n",
    "            ]\n",
    "        )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating Application class\n",
    "\n",
     "Our application class is shown below.\n",
    "\n",
     "It defines the `App` class, inheriting the base `Application` class.\n",
    "\n",
     "The base class method `compose` is overridden. Objects required for DICOM parsing, series selection, pixel data conversion to a volume image, and segmentation instance creation are instantiated, as is the model-specific `SpleenSegOperator`. The execution pipeline, a Directed Acyclic Graph (DAG), is created by connecting these objects through the `add_flow` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "class AISpleenSegApp(Application):\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        \"\"\"Creates an application instance.\"\"\"\n",
    "\n",
    "        super().__init__(*args, **kwargs)\n",
    "        self._logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "\n",
    "    def run(self, *args, **kwargs):\n",
    "        # This method calls the base class to run. Can be omitted if simply calling through.\n",
    "        self._logger.info(f\"Begin {self.run.__name__}\")\n",
    "        super().run(*args, **kwargs)\n",
    "        self._logger.info(f\"End {self.run.__name__}\")\n",
    "\n",
    "    def compose(self):\n",
    "        \"\"\"Creates the app specific operators and chain them up in the processing DAG.\"\"\"\n",
    "\n",
    "        self._logger.debug(f\"Begin {self.compose.__name__}\")\n",
    "        app_context = Application.init_app_context({})  # Do not pass argv in Jupyter Notebook\n",
    "        app_input_path = Path(app_context.input_path)\n",
    "        app_output_path = Path(app_context.output_path)\n",
    "        model_path = Path(app_context.model_path)\n",
    "\n",
    "        self._logger.info(f\"App input and output path: {app_input_path}, {app_output_path}\")\n",
    "\n",
    "        # instantiates the SDK built-in operator(s).\n",
    "        study_loader_op = DICOMDataLoaderOperator(\n",
    "            self, CountCondition(self, 1), input_folder=app_input_path, name=\"dcm_loader_op\"\n",
    "        )\n",
    "        series_selector_op = DICOMSeriesSelectorOperator(self, rules=Sample_Rules_Text, name=\"series_selector_op\")\n",
    "        series_to_vol_op = DICOMSeriesToVolumeOperator(self, name=\"series_to_vol_op\")\n",
    "\n",
    "        # Model specific inference operator, supporting MONAI transforms.\n",
    "        spleen_seg_op = SpleenSegOperator(self, app_context=app_context, model_path=model_path, name=\"seg_op\")\n",
    "\n",
    "        # Create DICOM Seg writer providing the required segment description for each segment with\n",
    "        # the actual algorithm and the pertinent organ/tissue.\n",
    "        # The segment_label, algorithm_name, and algorithm_version are limited to 64 chars.\n",
    "        # https://dicom.nema.org/medical/dicom/current/output/chtml/part05/sect_6.2.html\n",
    "        # User can Look up SNOMED CT codes at, e.g.\n",
    "        # https://bioportal.bioontology.org/ontologies/SNOMEDCT\n",
    "\n",
    "        _algorithm_name = \"3D segmentation of the Spleen from a CT series\"\n",
    "        _algorithm_family = codes.DCM.ArtificialIntelligence\n",
    "        _algorithm_version = \"0.1.0\"\n",
    "\n",
    "        segment_descriptions = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Spleen\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Spleen,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "        ]\n",
    "\n",
    "        custom_tags = {\"SeriesDescription\": \"AI generated Seg, not for clinical use.\"}\n",
    "\n",
    "        dicom_seg_writer = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=segment_descriptions,\n",
    "            custom_tags=custom_tags,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dcm_seg_writer_op\",\n",
    "        )\n",
    "\n",
    "        # Create the processing pipeline, by specifying the source and destination operators, and\n",
    "        # ensuring the output from the former matches the input of the latter, in both name and type.\n",
    "        self.add_flow(study_loader_op, series_selector_op, {(\"dicom_study_list\", \"dicom_study_list\")})\n",
    "        self.add_flow(\n",
    "            series_selector_op, series_to_vol_op, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(series_to_vol_op, spleen_seg_op, {(\"image\", \"image\")})\n",
    "\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a source operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op, dicom_seg_writer, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(spleen_seg_op, dicom_seg_writer, {(\"seg_image\", \"seg_image\")})\n",
    "\n",
    "        self._logger.debug(f\"End {self.compose.__name__}\")\n",
    "\n",
    "\n",
    "# This is a sample series selection rule in JSON, simply selecting CT series.\n",
    "# If the study has more than 1 CT series, then all of them will be selected.\n",
    "# Please see more detail in DICOMSeriesSelectorOperator.\n",
    "# For list of string values, e.g. \"ImageType\": [\"PRIMARY\", \"ORIGINAL\"], it is a match if all elements\n",
    "# are all in the multi-value attribute of the DICOM series.\n",
    "\n",
    "Sample_Rules_Text = \"\"\"\n",
    "{\n",
    "    \"selections\": [\n",
    "        {\n",
    "            \"name\": \"CT Series\",\n",
    "            \"conditions\": {\n",
    "                \"StudyDescription\": \"(.*?)\",\n",
    "                \"Modality\": \"(?i)CT\",\n",
    "                \"SeriesDescription\": \"(.*?)\",\n",
    "                \"ImageType\": [\"PRIMARY\", \"ORIGINAL\"]\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\"\"\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Executing app locally\n",
    "\n",
    "We can execute the app in Jupyter notebook. Note that the DICOM files of the CT Abdomen series must be present in the `dcm` folder and the TorchScript, `model.ts`, in the folder pointed to by the environment variables.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[info] [fragment.cpp:705] Loading extensions from configs...\n",
      "[2025-04-22 10:06:42,869] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=[])\n",
      "[2025-04-22 10:06:42,879] [INFO] (root) - AppContext object: AppContext(input_path=dcm, output_path=output, model_path=models, workdir=), triton_server_netloc=\n",
      "[2025-04-22 10:06:42,880] [INFO] (__main__.AISpleenSegApp) - App input and output path: dcm, output\n",
      "[info] [gxf_executor.cpp:265] Creating context\n",
      "[info] [gxf_executor.cpp:2396] Activating Graph...\n",
      "[info] [gxf_executor.cpp:2426] Running Graph...\n",
      "[info] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "[info] [greedy_scheduler.cpp:191] Scheduling 5 entities\n",
      "[2025-04-22 10:06:42,916] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "[2025-04-22 10:06:43,413] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "[2025-04-22 10:06:43,414] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "  # of series: 1\n",
      "[2025-04-22 10:06:43,416] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:43,417] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "[2025-04-22 10:06:43,418] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "[2025-04-22 10:06:43,420] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:43,421] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "[2025-04-22 10:06:43,422] [INFO] (root) -     Series attribute Modality value: CT\n",
      "[2025-04-22 10:06:43,426] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:43,427] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "[2025-04-22 10:06:43,429] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 10:06:43,430] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:43,432] [INFO] (root) - On attribute: 'ImageType' to match value: ['PRIMARY', 'ORIGINAL']\n",
      "[2025-04-22 10:06:43,434] [INFO] (root) -     Series attribute ImageType value: None\n",
      "[2025-04-22 10:06:43,436] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:43,437] [INFO] (root) - Series Selection finalized.\n",
      "[2025-04-22 10:06:43,438] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 10:06:43,438] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:43,669] [INFO] (root) - Casting to float32\n",
      "[2025-04-22 10:06:43,735] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Converted Image object metadata:\n",
      "[2025-04-22 10:06:43,737] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239, type <class 'str'>\n",
      "[2025-04-22 10:06:43,737] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDate: 20090831, type <class 'str'>\n",
      "[2025-04-22 10:06:43,738] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesTime: 101721.452, type <class 'str'>\n",
      "[2025-04-22 10:06:43,739] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Modality: CT, type <class 'str'>\n",
      "[2025-04-22 10:06:43,739] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDescription: ABD/PANC 3.0 B31f, type <class 'str'>\n",
      "[2025-04-22 10:06:43,740] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - PatientPosition: HFS, type <class 'str'>\n",
      "[2025-04-22 10:06:43,740] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesNumber: 8, type <class 'int'>\n",
      "[2025-04-22 10:06:43,741] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "[2025-04-22 10:06:43,742] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "[2025-04-22 10:06:43,743] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_pixel_spacing: 1.5, type <class 'float'>\n",
      "[2025-04-22 10:06:43,744] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_direction_cosine: [1.0, 0.0, 0.0], type <class 'list'>\n",
      "[2025-04-22 10:06:43,745] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_direction_cosine: [0.0, 1.0, 0.0], type <class 'list'>\n",
      "[2025-04-22 10:06:43,746] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_direction_cosine: [0.0, 0.0, 1.0], type <class 'list'>\n",
      "[2025-04-22 10:06:43,747] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - dicom_affine_transform: [[   0.7890625    0.           0.        -197.60547  ]\n",
      " [   0.           0.7890625    0.        -398.60547  ]\n",
      " [   0.           0.           1.5       -383.       ]\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "[2025-04-22 10:06:43,749] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - nifti_affine_transform: [[  -0.7890625   -0.          -0.         197.60547  ]\n",
      " [  -0.          -0.7890625   -0.         398.60547  ]\n",
      " [   0.           0.           1.5       -383.       ]\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "[2025-04-22 10:06:43,750] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291, type <class 'str'>\n",
      "[2025-04-22 10:06:43,751] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyID: , type <class 'str'>\n",
      "[2025-04-22 10:06:43,754] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDate: 20090831, type <class 'str'>\n",
      "[2025-04-22 10:06:43,755] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyTime: 095948.599, type <class 'str'>\n",
      "[2025-04-22 10:06:43,756] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDescription: CT ABDOMEN W IV CONTRAST, type <class 'str'>\n",
      "[2025-04-22 10:06:43,757] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - AccessionNumber: 5471978513296937, type <class 'str'>\n",
      "[2025-04-22 10:06:43,758] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - selection_name: CT Series, type <class 'str'>\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2025-04-22 10:06:44,648 INFO image_writer.py:197 - writing: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626.nii\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[2025-04-22 10:06:46,578] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Input of <class 'monai.data.meta_tensor.MetaTensor'> shape: torch.Size([1, 1, 270, 270, 106])\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2025-04-22 10:06:47,732 INFO image_writer.py:197 - writing: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626_seg.nii\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[2025-04-22 10:06:49,272] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform length/batch size of output: 1\n",
      "[2025-04-22 10:06:49,277] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pixel spacings for pred: tensor([0.7891, 0.7891, 1.5000], dtype=torch.float64)\n",
      "[2025-04-22 10:06:49,408] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pred of <class 'numpy.ndarray'> shape: (1, 512, 512, 204)\n",
      "[2025-04-22 10:06:49,448] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image numpy array of type <class 'numpy.ndarray'> shape: (204, 512, 512)\n",
      "[2025-04-22 10:06:49,453] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image pixel max value: 1\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "  check_person_name(patient_name)\n",
      "[2025-04-22 10:06:50,720] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:06:50,721] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 10:06:50,722] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:06:50,723] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 10:06:50,725] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 10:06:50,726] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:06:50,727] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 10:06:50,728] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 10:06:50,730] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[info] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "[info] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "[info] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "[info] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "[2025-04-22 10:06:50,833] [INFO] (__main__.AISpleenSegApp) - End run\n"
     ]
    }
   ],
   "source": [
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "app = AISpleenSegApp()\n",
    "app.run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the application is verified inside Jupyter notebook, we can write the above Python code into Python files in an application folder.\n",
    "\n",
    "The application folder structure would look like below:\n",
    "\n",
    "```bash\n",
    "my_app\n",
    "├── __main__.py\n",
    "├── app.py\n",
    "└── spleen_seg_operator.py\n",
    "```\n",
    "\n",
    ":::{note}\n",
    "We can create a single application Python file (such as `spleen_app.py`) that includes the content of the files, instead of creating multiple files.\n",
    "You will see such an example in <a href=\"./02_mednist_app.html#executing-app-locally\">MedNist Classifier Tutorial</a>.\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create an application folder\n",
    "!mkdir -p my_app && rm -rf my_app/*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### spleen_seg_operator.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/spleen_seg_operator.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/spleen_seg_operator.py\n",
    "import logging\n",
    "\n",
    "from numpy import uint8\n",
    "from pathlib import Path\n",
    "\n",
    "from monai.deploy.core import AppContext, ConditionType, Fragment, Operator, OperatorSpec\n",
    "from monai.deploy.operators.monai_seg_inference_operator import InfererType, InMemImageReader, MonaiSegInferenceOperator\n",
    "from monai.transforms import (\n",
    "    Activationsd,\n",
    "    AsDiscreted,\n",
    "    Compose,\n",
    "    EnsureChannelFirstd,\n",
    "    EnsureTyped,\n",
    "    Invertd,\n",
    "    LoadImaged,\n",
    "    Orientationd,\n",
    "    SaveImaged,\n",
    "    ScaleIntensityRanged,\n",
    "    Spacingd,\n",
    ")\n",
    "\n",
    "class SpleenSegOperator(Operator):\n",
    "    \"\"\"Performs Spleen segmentation with a 3D image converted from a DICOM CT series.\n",
    "    \"\"\"\n",
    "\n",
    "    DEFAULT_OUTPUT_FOLDER = Path.cwd() / \"output/saved_images_folder\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        fragment: Fragment,\n",
    "        *args,\n",
    "        app_context: AppContext,\n",
    "        model_path: Path,\n",
    "        output_folder: Path = DEFAULT_OUTPUT_FOLDER,\n",
    "        **kwargs,\n",
    "    ):\n",
    "\n",
    "        self.logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "        self._input_dataset_key = \"image\"\n",
    "        self._pred_dataset_key = \"pred\"\n",
    "\n",
    "        self.model_path = model_path\n",
    "        self.output_folder = output_folder\n",
    "        self.output_folder.mkdir(parents=True, exist_ok=True)\n",
    "        self.app_context = app_context\n",
    "        self.input_name_image = \"image\"\n",
    "        self.output_name_seg = \"seg_image\"\n",
    "        self.output_name_saved_images_folder = \"saved_images_folder\"\n",
    "\n",
    "        # The base class has an attribute called fragment to hold the reference to the fragment object\n",
    "        super().__init__(fragment, *args, **kwargs)\n",
    "\n",
    "    def setup(self, spec: OperatorSpec):\n",
    "        spec.input(self.input_name_image)\n",
    "        spec.output(self.output_name_seg)\n",
    "        spec.output(self.output_name_saved_images_folder).condition(\n",
    "            ConditionType.NONE\n",
    "        )  # Output not requiring a receiver\n",
    "\n",
    "    def compute(self, op_input, op_output, context):\n",
    "        input_image = op_input.receive(self.input_name_image)\n",
    "        if not input_image:\n",
    "            raise ValueError(\"Input image is not found.\")\n",
    "\n",
    "        # This operator gets an in-memory Image object, so a specialized ImageReader is needed.\n",
    "        _reader = InMemImageReader(input_image)\n",
    "\n",
    "        pre_transforms = self.pre_process(_reader, str(self.output_folder))\n",
    "        post_transforms = self.post_process(pre_transforms, str(self.output_folder))\n",
    "\n",
    "        # Delegates inference and saving output to the built-in operator.\n",
    "        infer_operator = MonaiSegInferenceOperator(\n",
    "            self.fragment,\n",
    "            roi_size=(\n",
    "                96,\n",
    "                96,\n",
    "                96,\n",
    "            ),\n",
    "            pre_transforms=pre_transforms,\n",
    "            post_transforms=post_transforms,\n",
    "            overlap=0.6,\n",
    "            app_context=self.app_context,\n",
    "            model_name=\"\",\n",
    "            inferer=InfererType.SLIDING_WINDOW,\n",
    "            sw_batch_size=4,\n",
    "            model_path=self.model_path,\n",
    "            name=\"monai_seg_inference_op\",\n",
    "        )\n",
    "\n",
    "        # Setting the keys used in the dictionary based transforms may change.\n",
    "        infer_operator.input_dataset_key = self._input_dataset_key\n",
    "        infer_operator.pred_dataset_key = self._pred_dataset_key\n",
    "\n",
    "        # Now emit data to the output ports of this operator\n",
    "        op_output.emit(infer_operator.compute_impl(input_image, context), self.output_name_seg)\n",
    "        op_output.emit(self.output_folder, self.output_name_saved_images_folder)\n",
    "\n",
    "    def pre_process(self, img_reader, out_dir: str = \"./input_images\") -> Compose:\n",
    "        \"\"\"Composes transforms for preprocessing input before predicting on a model.\"\"\"\n",
    "\n",
    "        Path(out_dir).mkdir(parents=True, exist_ok=True)\n",
    "        my_key = self._input_dataset_key\n",
    "\n",
    "        return Compose(\n",
    "            [\n",
    "                LoadImaged(keys=my_key, reader=img_reader),\n",
    "                EnsureChannelFirstd(keys=my_key),\n",
    "                # The SaveImaged transform can be commented out to save 5 seconds.\n",
    "                # Uncompress NIfTI file, nii, is used favoring speed over size, but can be changed to nii.gz\n",
    "                SaveImaged(\n",
    "                    keys=my_key,\n",
    "                    output_dir=out_dir,\n",
    "                    output_postfix=\"\",\n",
    "                    resample=False,\n",
    "                    output_ext=\".nii\",\n",
    "                ),\n",
    "                Orientationd(keys=my_key, axcodes=\"RAS\"),\n",
    "                Spacingd(keys=my_key, pixdim=[1.5, 1.5, 2.9], mode=[\"bilinear\"]),\n",
    "                ScaleIntensityRanged(keys=my_key, a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),\n",
    "                EnsureTyped(keys=my_key),\n",
    "            ]\n",
    "        )\n",
    "\n",
    "    def post_process(self, pre_transforms: Compose, out_dir: str = \"./prediction_output\") -> Compose:\n",
    "        \"\"\"Composes transforms for postprocessing the prediction results.\"\"\"\n",
    "\n",
    "        Path(out_dir).mkdir(parents=True, exist_ok=True)\n",
    "        pred_key = self._pred_dataset_key\n",
    "\n",
    "        return Compose(\n",
    "            [\n",
    "                Activationsd(keys=pred_key, softmax=True),\n",
    "                Invertd(\n",
    "                    keys=pred_key,\n",
    "                    transform=pre_transforms,\n",
    "                    orig_keys=self._input_dataset_key,\n",
    "                    nearest_interp=False,\n",
    "                    to_tensor=True,\n",
    "                ),\n",
    "                AsDiscreted(keys=pred_key, argmax=True),\n",
    "                # The SaveImaged transform can be commented out to save 5 seconds.\n",
    "                # Uncompress NIfTI file, nii, is used favoring speed over size, but can be changed to nii.gz\n",
    "                SaveImaged(\n",
    "                    keys=pred_key,\n",
    "                    output_dir=out_dir,\n",
    "                    output_postfix=\"seg\",\n",
    "                    output_dtype=uint8,\n",
    "                    resample=False,\n",
    "                    output_ext=\".nii\",\n",
    "                ),\n",
    "            ]\n",
    "        )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### app.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/app.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/app.py\n",
    "import logging\n",
    "from pathlib import Path\n",
    "\n",
    "from spleen_seg_operator import SpleenSegOperator\n",
    "\n",
    "from pydicom.sr.codedict import codes  # Required for setting SegmentDescription attributes.\n",
    "\n",
    "from monai.deploy.conditions import CountCondition\n",
    "from monai.deploy.core import AppContext, Application\n",
    "from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator\n",
    "from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription\n",
    "from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator\n",
    "from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator\n",
    "from monai.deploy.operators.stl_conversion_operator import STLConversionOperator\n",
    "\n",
    "class AISpleenSegApp(Application):\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        \"\"\"Creates an application instance.\"\"\"\n",
    "\n",
    "        super().__init__(*args, **kwargs)\n",
    "        self._logger = logging.getLogger(\"{}.{}\".format(__name__, type(self).__name__))\n",
    "\n",
    "    def run(self, *args, **kwargs):\n",
    "        # This method calls the base class to run. Can be omitted if simply calling through.\n",
    "        self._logger.info(f\"Begin {self.run.__name__}\")\n",
    "        super().run(*args, **kwargs)\n",
    "        self._logger.info(f\"End {self.run.__name__}\")\n",
    "\n",
    "    def compose(self):\n",
    "        \"\"\"Creates the app specific operators and chain them up in the processing DAG.\"\"\"\n",
    "\n",
    "        # Use Commandline options over environment variables to init context.\n",
    "        app_context = Application.init_app_context(self.argv)\n",
    "        self._logger.debug(f\"Begin {self.compose.__name__}\")\n",
    "        app_input_path = Path(app_context.input_path)\n",
    "        app_output_path = Path(app_context.output_path)\n",
    "        model_path = Path(app_context.model_path)\n",
    "\n",
    "        self._logger.info(f\"App input and output path: {app_input_path}, {app_output_path}\")\n",
    "\n",
    "        # instantiates the SDK built-in operator(s).\n",
    "        study_loader_op = DICOMDataLoaderOperator(\n",
    "            self, CountCondition(self, 1), input_folder=app_input_path, name=\"dcm_loader_op\"\n",
    "        )\n",
    "        series_selector_op = DICOMSeriesSelectorOperator(self, rules=Sample_Rules_Text, name=\"series_selector_op\")\n",
    "        series_to_vol_op = DICOMSeriesToVolumeOperator(self, name=\"series_to_vol_op\")\n",
    "\n",
    "        # Model specific inference operator, supporting MONAI transforms.\n",
    "        spleen_seg_op = SpleenSegOperator(self, app_context=app_context, model_path=model_path, name=\"seg_op\")\n",
    "\n",
    "        # Create DICOM Seg writer providing the required segment description for each segment with\n",
    "        # the actual algorithm and the pertinent organ/tissue.\n",
    "        # The segment_label, algorithm_name, and algorithm_version are limited to 64 chars.\n",
    "        # https://dicom.nema.org/medical/dicom/current/output/chtml/part05/sect_6.2.html\n",
    "        # User can Look up SNOMED CT codes at, e.g.\n",
    "        # https://bioportal.bioontology.org/ontologies/SNOMEDCT\n",
    "\n",
    "        _algorithm_name = \"3D segmentation of the Spleen from a CT series\"\n",
    "        _algorithm_family = codes.DCM.ArtificialIntelligence\n",
    "        _algorithm_version = \"0.1.0\"\n",
    "\n",
    "        segment_descriptions = [\n",
    "            SegmentDescription(\n",
    "                segment_label=\"Spleen\",\n",
    "                segmented_property_category=codes.SCT.Organ,\n",
    "                segmented_property_type=codes.SCT.Spleen,\n",
    "                algorithm_name=_algorithm_name,\n",
    "                algorithm_family=_algorithm_family,\n",
    "                algorithm_version=_algorithm_version,\n",
    "            ),\n",
    "        ]\n",
    "\n",
    "        custom_tags = {\"SeriesDescription\": \"AI generated Seg, not for clinical use.\"}\n",
    "\n",
    "        dicom_seg_writer = DICOMSegmentationWriterOperator(\n",
    "            self,\n",
    "            segment_descriptions=segment_descriptions,\n",
    "            custom_tags=custom_tags,\n",
    "            output_folder=app_output_path,\n",
    "            name=\"dcm_seg_writer_op\",\n",
    "        )\n",
    "\n",
    "        # Create the processing pipeline, by specifying the source and destination operators, and\n",
    "        # ensuring the output from the former matches the input of the latter, in both name and type.\n",
    "        self.add_flow(study_loader_op, series_selector_op, {(\"dicom_study_list\", \"dicom_study_list\")})\n",
    "        self.add_flow(\n",
    "            series_selector_op, series_to_vol_op, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(series_to_vol_op, spleen_seg_op, {(\"image\", \"image\")})\n",
    "\n",
    "        # Note below the dicom_seg_writer requires two inputs, each coming from a source operator.\n",
    "        self.add_flow(\n",
    "            series_selector_op, dicom_seg_writer, {(\"study_selected_series_list\", \"study_selected_series_list\")}\n",
    "        )\n",
    "        self.add_flow(spleen_seg_op, dicom_seg_writer, {(\"seg_image\", \"seg_image\")})\n",
    "\n",
    "        self._logger.debug(f\"End {self.compose.__name__}\")\n",
    "\n",
    "\n",
    "# This is a sample series selection rule in JSON, simply selecting CT series.\n",
    "# If the study has more than one CT series, then all of them will be selected.\n",
    "# Please see more details in DICOMSeriesSelectorOperator.\n",
    "# For a list of string values, e.g. \"ImageType\": [\"PRIMARY\", \"ORIGINAL\"], it is a match if all the\n",
    "# listed elements are present in the multi-valued attribute of the DICOM series.\n",
    "\n",
    "Sample_Rules_Text = \"\"\"\n",
    "{\n",
    "    \"selections\": [\n",
    "        {\n",
    "            \"name\": \"CT Series\",\n",
    "            \"conditions\": {\n",
    "                \"StudyDescription\": \"(.*?)\",\n",
    "                \"Modality\": \"(?i)CT\",\n",
    "                \"SeriesDescription\": \"(.*?)\",\n",
    "                \"ImageType\": [\"PRIMARY\", \"ORIGINAL\"]\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\"\"\"\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # Create the app and test it standalone.\n",
    "    AISpleenSegApp().run()"
   ]
  },
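  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the matching semantics of the selection rules above, here is a minimal, hypothetical sketch of how a single condition could be evaluated. The `matches` helper is for illustration only; see `DICOMSeriesSelectorOperator` for the SDK's actual matching logic.\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def matches(condition_value, series_value):\n",
    "    # List condition: every listed element must be present in the\n",
    "    # multi-valued DICOM attribute, e.g. ImageType.\n",
    "    if isinstance(condition_value, list):\n",
    "        return all(v in series_value for v in condition_value)\n",
    "    # String condition: try an exact string match first, then a regex search.\n",
    "    if condition_value == str(series_value):\n",
    "        return True\n",
    "    return re.search(condition_value, str(series_value)) is not None\n",
    "\n",
    "print(matches('(?i)CT', 'CT'))  # True: case-insensitive regex\n",
    "print(matches(['PRIMARY', 'ORIGINAL'], ['ORIGINAL', 'PRIMARY', 'AXIAL']))  # True\n",
    "print(matches('(.*?)', 'ABD/PANC 3.0 B31f'))  # True: matches any value\n",
    "```"
   ]
  },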
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```python\n",
    "if __name__ == \"__main__\":\n",
    "    AISpleenSegApp().run()\n",
    "```\n",
    "\n",
    "The above lines are needed to execute the application code with the `python` interpreter.\n",
    "\n",
    "### \\_\\_main\\_\\_.py\n",
    "\n",
    "\\_\\_main\\_\\_.py is needed for <a href=\"../../developing_with_sdk/packaging_app.html#required-arguments\">MONAI Application Packager</a> to detect the main application code (`app.py`) when the application is executed with the application folder path (e.g., `python my_app`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/__main__.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/__main__.py\n",
    "from app import AISpleenSegApp\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    AISpleenSegApp().run()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "app.py\t__main__.py  spleen_seg_operator.py\n"
     ]
    }
   ],
   "source": [
    "!ls my_app"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This time, let's execute the app from the command line.\n",
    "\n",
    ":::{note}\n",
    "Since the environment variables have been set and contain the correct paths, it is not necessary to provide the command line options when running the application.\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\u001b[32minfo\u001b[m] [fragment.cpp:705] Loading extensions from configs...\n",
      "[2025-04-22 10:06:55,953] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=['my_app'])\n",
      "[2025-04-22 10:06:55,955] [INFO] (root) - AppContext object: AppContext(input_path=dcm, output_path=output, model_path=models, workdir=), triton_server_netloc=\n",
      "[2025-04-22 10:06:55,955] [INFO] (app.AISpleenSegApp) - App input and output path: dcm, output\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:265] Creating context\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2396] Activating Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2426] Running Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:191] Scheduling 5 entities\n",
      "[2025-04-22 10:06:55,974] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "[2025-04-22 10:06:56,476] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "  # of series: 1\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) -     Series attribute Modality value: CT\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - On attribute: 'ImageType' to match value: ['PRIMARY', 'ORIGINAL']\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) -     Series attribute ImageType value: None\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series Selection finalized.\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "[2025-04-22 10:06:56,477] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "[2025-04-22 10:06:56,909] [INFO] (root) - Casting to float32\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Converted Image object metadata:\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDate: 20090831, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesTime: 101721.452, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Modality: CT, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDescription: ABD/PANC 3.0 B31f, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - PatientPosition: HFS, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesNumber: 8, type <class 'int'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_pixel_spacing: 1.5, type <class 'float'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_direction_cosine: [1.0, 0.0, 0.0], type <class 'list'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_direction_cosine: [0.0, 1.0, 0.0], type <class 'list'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_direction_cosine: [0.0, 0.0, 1.0], type <class 'list'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - dicom_affine_transform: [[   0.7890625    0.           0.        -197.60547  ]\n",
      " [   0.           0.7890625    0.        -398.60547  ]\n",
      " [   0.           0.           1.5       -383.       ]\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - nifti_affine_transform: [[  -0.7890625   -0.          -0.         197.60547  ]\n",
      " [  -0.          -0.7890625   -0.         398.60547  ]\n",
      " [   0.           0.           1.5       -383.       ]\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyID: , type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDate: 20090831, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyTime: 095948.599, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDescription: CT ABDOMEN W IV CONTRAST, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - AccessionNumber: 5471978513296937, type <class 'str'>\n",
      "[2025-04-22 10:06:57,053] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - selection_name: CT Series, type <class 'str'>\n",
      "2025-04-22 10:06:57,920 INFO image_writer.py:197 - writing: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626.nii\n",
      "[2025-04-22 10:06:59,901] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Input of <class 'monai.data.meta_tensor.MetaTensor'> shape: torch.Size([1, 1, 270, 270, 106])\n",
      "2025-04-22 10:07:00,973 INFO image_writer.py:197 - writing: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626_seg.nii\n",
      "[2025-04-22 10:07:02,569] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform length/batch size of output: 1\n",
      "[2025-04-22 10:07:02,570] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pixel spacings for pred: tensor([0.7891, 0.7891, 1.5000], dtype=torch.float64)\n",
      "[2025-04-22 10:07:02,697] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pred of <class 'numpy.ndarray'> shape: (1, 512, 512, 204)\n",
      "[2025-04-22 10:07:02,820] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image numpy array of type <class 'numpy.ndarray'> shape: (204, 512, 512)\n",
      "[2025-04-22 10:07:02,825] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image pixel max value: 1\n",
      "/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "  check_person_name(patient_name)\n",
      "[2025-04-22 10:07:04,062] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:07:04,062] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "[2025-04-22 10:07:04,062] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:07:04,062] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "[2025-04-22 10:07:04,063] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "[2025-04-22 10:07:04,063] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "[2025-04-22 10:07:04,063] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "[2025-04-22 10:07:04,063] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "[2025-04-22 10:07:04,063] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "[\u001b[32minfo\u001b[m] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "[2025-04-22 10:07:04,175] [INFO] (app.AISpleenSegApp) - End run\n",
      "[\u001b[32minfo\u001b[m] [gxf_executor.cpp:295] Destroying context\n"
     ]
    }
   ],
   "source": [
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "!python my_app"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "output:\n",
      "1.2.826.0.1.3680043.10.511.3.17902633705887989912813743024111302.dcm\n",
      "saved_images_folder\n",
      "\n",
      "output/saved_images_folder:\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626\n",
      "\n",
      "output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626:\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626.nii\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626_seg.nii\n"
     ]
    }
   ],
   "source": [
    "!ls -R $HOLOSCAN_OUTPUT_PATH"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Packaging app"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's package the app with [MONAI Application Packager](/developing_with_sdk/packaging_app).\n",
    "\n",
    "In this version of the App SDK, we need to write out the configuration YAML file, as well as the package requirements file, in the application folder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/app.yaml\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/app.yaml\n",
    "%YAML 1.2\n",
    "---\n",
    "application:\n",
    "  title: MONAI Deploy App Package - MONAI Bundle AI App\n",
    "  version: 1.0\n",
    "  inputFormats: [\"file\"]\n",
    "  outputFormats: [\"file\"]\n",
    "\n",
    "resources:\n",
    "  cpu: 1\n",
    "  gpu: 1\n",
    "  memory: 1Gi\n",
    "  gpuMemory: 6Gi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing my_app/requirements.txt\n"
     ]
    }
   ],
   "source": [
    "%%writefile my_app/requirements.txt\n",
    "highdicom>=0.18.2\n",
    "monai>=1.0\n",
    "nibabel>=3.2.1\n",
    "numpy>=1.21.6\n",
    "pydicom>=2.3.0\n",
    "setuptools>=59.5.0 # for pkg_resources\n",
    "SimpleITK>=2.0.0\n",
    "torch>=1.12.0\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can use the CLI package command to build the MONAI Application Package (MAP) container image based on a supported base image.\n",
    "\n",
    ":::{note}\n",
    "Building a MONAI Application Package (Docker image) can take time. Use the `-l DEBUG` option to see the progress.\n",
    ":::"
   ]
  },
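  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch, the package command takes the application folder, the model folder, the application config file, an image tag, and the target platform. The exact flags and platform names depend on the SDK version, and the values below are illustrative:\n",
    "\n",
    "```bash\n",
    "monai-deploy package my_app -m models -c my_app/app.yaml -t my_app:1.0 --platform x64-workstation -l DEBUG\n",
    "```"
   ]
  },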
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-04-22 10:07:06,268] [INFO] (common) - Downloading CLI manifest file...\n",
      "[2025-04-22 10:07:06,478] [DEBUG] (common) - Validating CLI manifest file...\n",
      "[2025-04-22 10:07:06,478] [INFO] (packager.parameters) - Application: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app\n",
      "[2025-04-22 10:07:06,478] [INFO] (packager.parameters) - Detected application type: Python Module\n",
      "[2025-04-22 10:07:06,478] [INFO] (packager) - Scanning for models in /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/models...\n",
      "[2025-04-22 10:07:06,478] [DEBUG] (packager) - Model model=/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/models/model added.\n",
      "[2025-04-22 10:07:06,478] [INFO] (packager) - Reading application configuration from /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/app.yaml...\n",
      "[2025-04-22 10:07:06,480] [INFO] (packager) - Generating app.json...\n",
      "[2025-04-22 10:07:06,481] [INFO] (packager) - Generating pkg.json...\n",
      "[2025-04-22 10:07:06,483] [DEBUG] (common) - \n",
      "=============== Begin app.json ===============\n",
      "{\n",
      "    \"apiVersion\": \"1.0.0\",\n",
      "    \"command\": \"[\\\"python3\\\", \\\"/opt/holoscan/app\\\"]\",\n",
      "    \"environment\": {\n",
      "        \"HOLOSCAN_APPLICATION\": \"/opt/holoscan/app\",\n",
      "        \"HOLOSCAN_INPUT_PATH\": \"input/\",\n",
      "        \"HOLOSCAN_OUTPUT_PATH\": \"output/\",\n",
      "        \"HOLOSCAN_WORKDIR\": \"/var/holoscan\",\n",
      "        \"HOLOSCAN_MODEL_PATH\": \"/opt/holoscan/models\",\n",
      "        \"HOLOSCAN_CONFIG_PATH\": \"/var/holoscan/app.yaml\",\n",
      "        \"HOLOSCAN_APP_MANIFEST_PATH\": \"/etc/holoscan/app.json\",\n",
      "        \"HOLOSCAN_PKG_MANIFEST_PATH\": \"/etc/holoscan/pkg.json\",\n",
      "        \"HOLOSCAN_DOCS_PATH\": \"/opt/holoscan/docs\",\n",
      "        \"HOLOSCAN_LOGS_PATH\": \"/var/holoscan/logs\"\n",
      "    },\n",
      "    \"input\": {\n",
      "        \"path\": \"input/\",\n",
      "        \"formats\": null\n",
      "    },\n",
      "    \"liveness\": null,\n",
      "    \"output\": {\n",
      "        \"path\": \"output/\",\n",
      "        \"formats\": null\n",
      "    },\n",
      "    \"readiness\": null,\n",
      "    \"sdk\": \"monai-deploy\",\n",
      "    \"sdkVersion\": \"0.5.1\",\n",
      "    \"timeout\": 0,\n",
      "    \"version\": 1.0,\n",
      "    \"workingDirectory\": \"/var/holoscan\"\n",
      "}\n",
      "================ End app.json ================\n",
      "                 \n",
      "[2025-04-22 10:07:06,484] [DEBUG] (common) - \n",
      "=============== Begin pkg.json ===============\n",
      "{\n",
      "    \"apiVersion\": \"1.0.0\",\n",
      "    \"applicationRoot\": \"/opt/holoscan/app\",\n",
      "    \"modelRoot\": \"/opt/holoscan/models\",\n",
      "    \"models\": {\n",
      "        \"model\": \"/opt/holoscan/models/model\"\n",
      "    },\n",
      "    \"resources\": {\n",
      "        \"cpu\": 1,\n",
      "        \"gpu\": 1,\n",
      "        \"memory\": \"1Gi\",\n",
      "        \"gpuMemory\": \"6Gi\"\n",
      "    },\n",
      "    \"version\": 1.0,\n",
      "    \"platformConfig\": \"dgpu\"\n",
      "}\n",
      "================ End pkg.json ================\n",
      "                 \n",
      "[2025-04-22 10:07:06,504] [DEBUG] (packager.builder) - \n",
      "========== Begin Build Parameters ==========\n",
      "{'additional_lib_paths': '',\n",
      " 'app_config_file_path': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/app.yaml'),\n",
      " 'app_dir': PosixPath('/opt/holoscan/app'),\n",
      " 'app_json': '/etc/holoscan/app.json',\n",
      " 'application': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app'),\n",
      " 'application_directory': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app'),\n",
      " 'application_type': 'PythonModule',\n",
      " 'build_cache': PosixPath('/home/mqin/.holoscan_build_cache'),\n",
      " 'cmake_args': '',\n",
      " 'command': '[\"python3\", \"/opt/holoscan/app\"]',\n",
      " 'command_filename': 'my_app',\n",
      " 'config_file_path': PosixPath('/var/holoscan/app.yaml'),\n",
      " 'docs_dir': PosixPath('/opt/holoscan/docs'),\n",
      " 'full_input_path': PosixPath('/var/holoscan/input'),\n",
      " 'full_output_path': PosixPath('/var/holoscan/output'),\n",
      " 'gid': 1000,\n",
      " 'holoscan_sdk_version': '3.1.0',\n",
      " 'includes': [],\n",
      " 'input_dir': 'input/',\n",
      " 'lib_dir': PosixPath('/opt/holoscan/lib'),\n",
      " 'logs_dir': PosixPath('/var/holoscan/logs'),\n",
      " 'models': {'model': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/models/model')},\n",
      " 'models_dir': PosixPath('/opt/holoscan/models'),\n",
      " 'monai_deploy_app_sdk_version': '0.5.1',\n",
      " 'no_cache': False,\n",
      " 'output_dir': 'output/',\n",
      " 'pip_packages': None,\n",
      " 'pkg_json': '/etc/holoscan/pkg.json',\n",
      " 'requirements_file_path': PosixPath('/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/my_app/requirements.txt'),\n",
      " 'sdk': <SdkType.MonaiDeploy: 'monai-deploy'>,\n",
      " 'sdk_type': 'monai-deploy',\n",
      " 'tarball_output': None,\n",
      " 'timeout': 0,\n",
      " 'title': 'MONAI Deploy App Package - MONAI Bundle AI App',\n",
      " 'uid': 1000,\n",
      " 'username': 'holoscan',\n",
      " 'version': 1.0,\n",
      " 'working_dir': PosixPath('/var/holoscan')}\n",
      "=========== End Build Parameters ===========\n",
      "\n",
      "[2025-04-22 10:07:06,504] [DEBUG] (packager.builder) - \n",
      "========== Begin Platform Parameters ==========\n",
      "{'base_image': 'nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04',\n",
      " 'build_image': None,\n",
      " 'cuda_deb_arch': 'x86_64',\n",
      " 'custom_base_image': False,\n",
      " 'custom_holoscan_sdk': False,\n",
      " 'custom_monai_deploy_sdk': True,\n",
      " 'gpu_type': 'dgpu',\n",
      " 'holoscan_deb_arch': 'amd64',\n",
      " 'holoscan_sdk_file': '3.1.0',\n",
      " 'holoscan_sdk_filename': '3.1.0',\n",
      " 'monai_deploy_sdk_file': PosixPath('/home/mqin/src/monai-deploy-app-sdk/dist/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl'),\n",
      " 'monai_deploy_sdk_filename': 'monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl',\n",
      " 'tag': 'my_app:1.0',\n",
      " 'target_arch': 'x86_64'}\n",
      "=========== End Platform Parameters ===========\n",
      "\n",
      "[2025-04-22 10:07:06,521] [DEBUG] (packager.builder) - \n",
      "========== Begin Dockerfile ==========\n",
      "\n",
      "ARG GPU_TYPE=dgpu\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "FROM nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04 AS base\n",
      "\n",
      "RUN apt-get update \\\n",
      "    && apt-get install -y --no-install-recommends --no-install-suggests \\\n",
      "        curl \\\n",
      "        jq \\\n",
      "    && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "# FROM base AS mofed-installer\n",
      "# ARG MOFED_VERSION=23.10-2.1.3.1\n",
      "\n",
      "# # In a container, we only need to install the user space libraries, though the drivers are still\n",
      "# # needed on the host.\n",
      "# # Note: MOFED's installation is not easily portable, so we can't copy the output of this stage\n",
      "# # to our final stage, but must inherit from it. For that reason, we keep track of the build/install\n",
      "# # only dependencies in the `MOFED_DEPS` variable (parsing the output of `--check-deps-only`) to\n",
      "# # remove them in that same layer, to ensure they are not propagated in the final image.\n",
      "# WORKDIR /opt/nvidia/mofed\n",
      "# ARG MOFED_INSTALL_FLAGS=\"--dpdk --with-mft --user-space-only --force --without-fw-update\"\n",
      "# RUN UBUNTU_VERSION=$(cat /etc/lsb-release | grep DISTRIB_RELEASE | cut -d= -f2) \\\n",
      "#     && OFED_PACKAGE=\"MLNX_OFED_LINUX-${MOFED_VERSION}-ubuntu${UBUNTU_VERSION}-$(uname -m)\" \\\n",
      "#     && curl -S -# -o ${OFED_PACKAGE}.tgz -L \\\n",
      "#         https://www.mellanox.com/downloads/ofed/MLNX_OFED-${MOFED_VERSION}/${OFED_PACKAGE}.tgz \\\n",
      "#     && tar xf ${OFED_PACKAGE}.tgz \\\n",
      "#     && MOFED_INSTALLER=$(find . -name mlnxofedinstall -type f -executable -print) \\\n",
      "#     && MOFED_DEPS=$(${MOFED_INSTALLER} ${MOFED_INSTALL_FLAGS} --check-deps-only 2>/dev/null | tail -n1 |  cut -d' ' -f3-) \\\n",
      "#     && apt-get update \\\n",
      "#     && apt-get install --no-install-recommends -y ${MOFED_DEPS} \\\n",
      "#     && ${MOFED_INSTALLER} ${MOFED_INSTALL_FLAGS} \\\n",
      "#     && rm -r * \\\n",
      "#     && apt-get remove -y ${MOFED_DEPS} && apt-get autoremove -y \\\n",
      "#     && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "FROM base AS release\n",
      "ENV DEBIAN_FRONTEND=noninteractive\n",
      "ENV TERM=xterm-256color\n",
      "\n",
      "ARG GPU_TYPE\n",
      "ARG UNAME\n",
      "ARG UID\n",
      "ARG GID\n",
      "\n",
      "RUN mkdir -p /etc/holoscan/ \\\n",
      "        && mkdir -p /opt/holoscan/ \\\n",
      "        && mkdir -p /var/holoscan \\\n",
      "        && mkdir -p /opt/holoscan/app \\\n",
      "        && mkdir -p /var/holoscan/input \\\n",
      "        && mkdir -p /var/holoscan/output\n",
      "\n",
      "LABEL base=\"nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\"\n",
      "LABEL tag=\"my_app:1.0\"\n",
      "LABEL org.opencontainers.image.title=\"MONAI Deploy App Package - MONAI Bundle AI App\"\n",
      "LABEL org.opencontainers.image.version=\"1.0\"\n",
      "LABEL org.nvidia.holoscan=\"3.1.0\"\n",
      "\n",
      "LABEL org.monai.deploy.app-sdk=\"0.5.1\"\n",
      "\n",
      "ENV HOLOSCAN_INPUT_PATH=/var/holoscan/input\n",
      "ENV HOLOSCAN_OUTPUT_PATH=/var/holoscan/output\n",
      "ENV HOLOSCAN_WORKDIR=/var/holoscan\n",
      "ENV HOLOSCAN_APPLICATION=/opt/holoscan/app\n",
      "ENV HOLOSCAN_TIMEOUT=0\n",
      "ENV HOLOSCAN_MODEL_PATH=/opt/holoscan/models\n",
      "ENV HOLOSCAN_DOCS_PATH=/opt/holoscan/docs\n",
      "ENV HOLOSCAN_CONFIG_PATH=/var/holoscan/app.yaml\n",
      "ENV HOLOSCAN_APP_MANIFEST_PATH=/etc/holoscan/app.json\n",
      "ENV HOLOSCAN_PKG_MANIFEST_PATH=/etc/holoscan/pkg.json\n",
      "ENV HOLOSCAN_LOGS_PATH=/var/holoscan/logs\n",
      "ENV HOLOSCAN_VERSION=3.1.0\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "# If torch is installed, we can skip installing Python\n",
      "ENV PYTHON_VERSION=3.10.6-1~22.04\n",
      "ENV PYTHON_PIP_VERSION=22.0.2+dfsg-*\n",
      "\n",
      "RUN apt update \\\n",
      "    && apt-get install -y --no-install-recommends --no-install-suggests \\\n",
      "        python3-minimal=${PYTHON_VERSION} \\\n",
      "        libpython3-stdlib=${PYTHON_VERSION} \\\n",
      "        python3=${PYTHON_VERSION} \\\n",
      "        python3-venv=${PYTHON_VERSION} \\\n",
      "        python3-pip=${PYTHON_PIP_VERSION} \\\n",
      "    && rm -rf /var/lib/apt/lists/*\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "RUN groupadd -f -g $GID $UNAME\n",
      "RUN useradd -rm -d /home/$UNAME -s /bin/bash -g $GID -G sudo -u $UID $UNAME\n",
      "RUN chown -R holoscan /var/holoscan && \\\n",
      "    chown -R holoscan /var/holoscan/input && \\\n",
      "    chown -R holoscan /var/holoscan/output\n",
      "\n",
      "# Set the working directory\n",
      "WORKDIR /var/holoscan\n",
      "\n",
      "# Copy HAP/MAP tool script\n",
      "COPY ./tools /var/holoscan/tools\n",
      "RUN chmod +x /var/holoscan/tools\n",
      "\n",
      "# Set the working directory\n",
      "WORKDIR /var/holoscan\n",
      "\n",
      "USER $UNAME\n",
      "\n",
      "ENV PATH=/home/${UNAME}/.local/bin:/opt/nvidia/holoscan/bin:$PATH\n",
      "ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/${UNAME}/.local/lib/python3.10/site-packages/holoscan/lib\n",
      "\n",
      "COPY ./pip/requirements.txt /tmp/requirements.txt\n",
      "\n",
      "RUN pip install --upgrade pip\n",
      "RUN pip install --no-cache-dir --user -r /tmp/requirements.txt\n",
      "\n",
      "\n",
      "# Install MONAI Deploy App SDK\n",
      "# Copy user-specified MONAI Deploy SDK file\n",
      "COPY ./monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl /tmp/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "RUN pip install /tmp/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "\n",
      "COPY ./models  /opt/holoscan/models\n",
      "\n",
      "\n",
      "COPY ./map/app.json /etc/holoscan/app.json\n",
      "COPY ./app.config /var/holoscan/app.yaml\n",
      "COPY ./map/pkg.json /etc/holoscan/pkg.json\n",
      "\n",
      "COPY ./app /opt/holoscan/app\n",
      "\n",
      "\n",
      "ENTRYPOINT [\"/var/holoscan/tools\"]\n",
      "=========== End Dockerfile ===========\n",
      "\n",
      "[2025-04-22 10:07:06,522] [INFO] (packager.builder) - \n",
      "===============================================================================\n",
      "Building image for:                 x64-workstation\n",
      "    Architecture:                   linux/amd64\n",
      "    Base Image:                     nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "    Build Image:                    N/A\n",
      "    Cache:                          Enabled\n",
      "    Configuration:                  dgpu\n",
      "    Holoscan SDK Package:           3.1.0\n",
      "    MONAI Deploy App SDK Package:   /home/mqin/src/monai-deploy-app-sdk/dist/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "    gRPC Health Probe:              N/A\n",
      "    SDK Version:                    3.1.0\n",
      "    SDK:                            monai-deploy\n",
      "    Tag:                            my_app-x64-workstation-dgpu-linux-amd64:1.0\n",
      "    Included features/dependencies: N/A\n",
      "    \n",
      "[2025-04-22 10:07:06,841] [INFO] (common) - Using existing Docker BuildKit builder `holoscan_app_builder`\n",
      "[2025-04-22 10:07:06,841] [DEBUG] (packager.builder) - Building Holoscan Application Package: tag=my_app-x64-workstation-dgpu-linux-amd64:1.0\n",
      "#0 building with \"holoscan_app_builder\" instance using docker-container driver\n",
      "\n",
      "#1 [internal] load build definition from Dockerfile\n",
      "#1 transferring dockerfile: 4.74kB done\n",
      "#1 DONE 0.1s\n",
      "\n",
      "#2 [auth] nvidia/cuda:pull token for nvcr.io\n",
      "#2 DONE 0.0s\n",
      "\n",
      "#3 [internal] load metadata for nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#3 DONE 0.4s\n",
      "\n",
      "#4 [internal] load .dockerignore\n",
      "#4 transferring context: 1.80kB done\n",
      "#4 DONE 0.1s\n",
      "\n",
      "#5 importing cache manifest from nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#5 ...\n",
      "\n",
      "#6 [internal] load build context\n",
      "#6 DONE 0.0s\n",
      "\n",
      "#7 importing cache manifest from local:9106061615573359344\n",
      "#7 inferred cache manifest type: application/vnd.oci.image.index.v1+json done\n",
      "#7 DONE 0.0s\n",
      "\n",
      "#8 [base 1/2] FROM nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04@sha256:22fc009e5cea0b8b91d94c99fdd419d2366810b5ea835e47b8343bc15800c186\n",
      "#8 resolve nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04@sha256:22fc009e5cea0b8b91d94c99fdd419d2366810b5ea835e47b8343bc15800c186 0.1s done\n",
      "#8 DONE 0.1s\n",
      "\n",
      "#5 importing cache manifest from nvcr.io/nvidia/cuda:12.6.0-runtime-ubuntu22.04\n",
      "#5 inferred cache manifest type: application/vnd.docker.distribution.manifest.list.v2+json done\n",
      "#5 DONE 0.3s\n",
      "\n",
      "#6 [internal] load build context\n",
      "#6 transferring context: 19.58MB 0.1s done\n",
      "#6 DONE 0.3s\n",
      "\n",
      "#9 [release  4/19] RUN useradd -rm -d /home/holoscan -s /bin/bash -g 1000 -G sudo -u 1000 holoscan\n",
      "#9 CACHED\n",
      "\n",
      "#10 [release  5/19] RUN chown -R holoscan /var/holoscan &&     chown -R holoscan /var/holoscan/input &&     chown -R holoscan /var/holoscan/output\n",
      "#10 CACHED\n",
      "\n",
      "#11 [release  2/19] RUN apt update     && apt-get install -y --no-install-recommends --no-install-suggests         python3-minimal=3.10.6-1~22.04         libpython3-stdlib=3.10.6-1~22.04         python3=3.10.6-1~22.04         python3-venv=3.10.6-1~22.04         python3-pip=22.0.2+dfsg-*     && rm -rf /var/lib/apt/lists/*\n",
      "#11 CACHED\n",
      "\n",
      "#12 [release  3/19] RUN groupadd -f -g 1000 holoscan\n",
      "#12 CACHED\n",
      "\n",
      "#13 [base 2/2] RUN apt-get update     && apt-get install -y --no-install-recommends --no-install-suggests         curl         jq     && rm -rf /var/lib/apt/lists/*\n",
      "#13 CACHED\n",
      "\n",
      "#14 [release  1/19] RUN mkdir -p /etc/holoscan/         && mkdir -p /opt/holoscan/         && mkdir -p /var/holoscan         && mkdir -p /opt/holoscan/app         && mkdir -p /var/holoscan/input         && mkdir -p /var/holoscan/output\n",
      "#14 CACHED\n",
      "\n",
      "#15 [release  6/19] WORKDIR /var/holoscan\n",
      "#15 CACHED\n",
      "\n",
      "#16 [release  7/19] COPY ./tools /var/holoscan/tools\n",
      "#16 CACHED\n",
      "\n",
      "#17 [release  8/19] RUN chmod +x /var/holoscan/tools\n",
      "#17 CACHED\n",
      "\n",
      "#18 [release  9/19] WORKDIR /var/holoscan\n",
      "#18 CACHED\n",
      "\n",
      "#19 [release 10/19] COPY ./pip/requirements.txt /tmp/requirements.txt\n",
      "#19 DONE 1.4s\n",
      "\n",
      "#20 [release 11/19] RUN pip install --upgrade pip\n",
      "#20 0.781 Defaulting to user installation because normal site-packages is not writeable\n",
      "#20 0.831 Requirement already satisfied: pip in /usr/lib/python3/dist-packages (22.0.2)\n",
      "#20 0.968 Collecting pip\n",
      "#20 1.063   Downloading pip-25.0.1-py3-none-any.whl (1.8 MB)\n",
      "#20 1.220      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 12.3 MB/s eta 0:00:00\n",
      "#20 1.247 Installing collected packages: pip\n",
      "#20 1.992 Successfully installed pip-25.0.1\n",
      "#20 DONE 2.2s\n",
      "\n",
      "#21 [release 12/19] RUN pip install --no-cache-dir --user -r /tmp/requirements.txt\n",
      "#21 0.752 Collecting highdicom>=0.18.2 (from -r /tmp/requirements.txt (line 1))\n",
      "#21 0.790   Downloading highdicom-0.25.1-py3-none-any.whl.metadata (5.0 kB)\n",
      "#21 0.829 Collecting monai>=1.0 (from -r /tmp/requirements.txt (line 2))\n",
      "#21 0.842   Downloading monai-1.4.0-py3-none-any.whl.metadata (11 kB)\n",
      "#21 0.945 Collecting nibabel>=3.2.1 (from -r /tmp/requirements.txt (line 3))\n",
      "#21 0.956   Downloading nibabel-5.3.2-py3-none-any.whl.metadata (9.1 kB)\n",
      "#21 1.134 Collecting numpy>=1.21.6 (from -r /tmp/requirements.txt (line 4))\n",
      "#21 1.144   Downloading numpy-2.2.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)\n",
      "#21 1.183 Collecting pydicom>=2.3.0 (from -r /tmp/requirements.txt (line 5))\n",
      "#21 1.195   Downloading pydicom-3.0.1-py3-none-any.whl.metadata (9.4 kB)\n",
      "#21 1.203 Requirement already satisfied: setuptools>=59.5.0 in /usr/lib/python3/dist-packages (from -r /tmp/requirements.txt (line 6)) (59.6.0)\n",
      "#21 1.236 Collecting SimpleITK>=2.0.0 (from -r /tmp/requirements.txt (line 7))\n",
      "#21 1.247   Downloading SimpleITK-2.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.9 kB)\n",
      "#21 1.295 Collecting torch>=1.12.0 (from -r /tmp/requirements.txt (line 8))\n",
      "#21 1.307   Downloading torch-2.6.0-cp310-cp310-manylinux1_x86_64.whl.metadata (28 kB)\n",
      "#21 1.462 Collecting pillow>=8.3 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.474   Downloading pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (8.9 kB)\n",
      "#21 1.580 Collecting pyjpegls>=1.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.658   Downloading pyjpegls-1.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 1.694 Collecting typing-extensions>=4.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 1.706   Downloading typing_extensions-4.13.2-py3-none-any.whl.metadata (3.0 kB)\n",
      "#21 1.723 Collecting numpy>=1.21.6 (from -r /tmp/requirements.txt (line 4))\n",
      "#21 1.734   Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)\n",
      "#21 1.773 Collecting importlib-resources>=5.12 (from nibabel>=3.2.1->-r /tmp/requirements.txt (line 3))\n",
      "#21 1.784   Downloading importlib_resources-6.5.2-py3-none-any.whl.metadata (3.9 kB)\n",
      "#21 1.865 Collecting packaging>=20 (from nibabel>=3.2.1->-r /tmp/requirements.txt (line 3))\n",
      "#21 1.876   Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)\n",
      "#21 1.922 Collecting filelock (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.932   Downloading filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)\n",
      "#21 1.984 Collecting networkx (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 1.995   Downloading networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)\n",
      "#21 2.036 Collecting jinja2 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.046   Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)\n",
      "#21 2.076 Collecting fsspec (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.087   Downloading fsspec-2025.3.2-py3-none-any.whl.metadata (11 kB)\n",
      "#21 2.150 Collecting nvidia-cuda-nvrtc-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.163   Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.182 Collecting nvidia-cuda-runtime-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.194   Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.220 Collecting nvidia-cuda-cupti-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.232   Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.266 Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.277   Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.303 Collecting nvidia-cublas-cu12==12.4.5.8 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.316   Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.343 Collecting nvidia-cufft-cu12==11.2.1.3 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.355   Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.376 Collecting nvidia-curand-cu12==10.3.5.147 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.389   Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.415 Collecting nvidia-cusolver-cu12==11.6.1.9 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.427   Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.455 Collecting nvidia-cusparse-cu12==12.3.1.170 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.467   Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "#21 2.486 Collecting nvidia-cusparselt-cu12==0.6.2 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.499   Downloading nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)\n",
      "#21 2.522 Collecting nvidia-nccl-cu12==2.21.5 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.535   Downloading nvidia_nccl_cu12-2.21.5-py3-none-manylinux2014_x86_64.whl.metadata (1.8 kB)\n",
      "#21 2.561 Collecting nvidia-nvtx-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.574   Downloading nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.7 kB)\n",
      "#21 2.599 Collecting nvidia-nvjitlink-cu12==12.4.127 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.612   Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "#21 2.640 Collecting triton==3.2.0 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.653   Downloading triton-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.4 kB)\n",
      "#21 2.688 Collecting sympy==1.13.1 (from torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.699   Downloading sympy-1.13.1-py3-none-any.whl.metadata (12 kB)\n",
      "#21 2.734 Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.745   Downloading mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)\n",
      "#21 2.771 INFO: pip is looking at multiple versions of pyjpegls to determine which version is compatible with other requirements. This could take a while.\n",
      "#21 2.771 Collecting pyjpegls>=1.0.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 1))\n",
      "#21 2.783   Downloading pyjpegls-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 2.798   Downloading pyjpegls-1.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)\n",
      "#21 2.868 Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.12.0->-r /tmp/requirements.txt (line 8))\n",
      "#21 2.879   Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)\n",
      "#21 2.896 Downloading highdicom-0.25.1-py3-none-any.whl (1.1 MB)\n",
      "#21 2.925    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 52.9 MB/s eta 0:00:00\n",
      "#21 2.944 Downloading monai-1.4.0-py3-none-any.whl (1.5 MB)\n",
      "#21 2.962    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 125.9 MB/s eta 0:00:00\n",
      "#21 2.976 Downloading nibabel-5.3.2-py3-none-any.whl (3.3 MB)\n",
      "#21 3.029    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 70.7 MB/s eta 0:00:00\n",
      "#21 3.043 Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)\n",
      "#21 3.293    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 74.5 MB/s eta 0:00:00\n",
      "#21 3.311 Downloading pydicom-3.0.1-py3-none-any.whl (2.4 MB)\n",
      "#21 3.347    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 74.4 MB/s eta 0:00:00\n",
      "#21 3.360 Downloading SimpleITK-2.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (52.4 MB)\n",
      "#21 4.004    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.4/52.4 MB 81.9 MB/s eta 0:00:00\n",
      "#21 4.020 Downloading torch-2.6.0-cp310-cp310-manylinux1_x86_64.whl (766.7 MB)\n",
      "#21 11.28    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 766.7/766.7 MB 102.9 MB/s eta 0:00:00\n",
      "#21 11.30 Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl (363.4 MB)\n",
      "#21 15.09    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 363.4/363.4 MB 102.0 MB/s eta 0:00:00\n",
      "#21 15.10 Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (13.8 MB)\n",
      "#21 15.23    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.8/13.8 MB 113.9 MB/s eta 0:00:00\n",
      "#21 15.25 Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (24.6 MB)\n",
      "#21 15.47    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 115.1 MB/s eta 0:00:00\n",
      "#21 15.48 Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (883 kB)\n",
      "#21 15.49    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 883.7/883.7 kB 240.5 MB/s eta 0:00:00\n",
      "#21 15.51 Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl (664.8 MB)\n",
      "#21 21.44    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 664.8/664.8 MB 116.6 MB/s eta 0:00:00\n",
      "#21 21.46 Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl (211.5 MB)\n",
      "#21 23.47    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 211.5/211.5 MB 105.6 MB/s eta 0:00:00\n",
      "#21 23.48 Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl (56.3 MB)\n",
      "#21 23.98    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 MB 114.1 MB/s eta 0:00:00\n",
      "#21 23.99 Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl (127.9 MB)\n",
      "#21 25.10    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 127.9/127.9 MB 116.7 MB/s eta 0:00:00\n",
      "#21 25.11 Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl (207.5 MB)\n",
      "#21 27.11    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.5/207.5 MB 104.4 MB/s eta 0:00:00\n",
      "#21 27.13 Downloading nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl (150.1 MB)\n",
      "#21 28.49    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 150.1/150.1 MB 110.5 MB/s eta 0:00:00\n",
      "#21 28.50 Downloading nvidia_nccl_cu12-2.21.5-py3-none-manylinux2014_x86_64.whl (188.7 MB)\n",
      "#21 30.12    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 188.7/188.7 MB 117.2 MB/s eta 0:00:00\n",
      "#21 30.13 Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)\n",
      "#21 30.35    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.1/21.1 MB 100.3 MB/s eta 0:00:00\n",
      "#21 30.37 Downloading nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (99 kB)\n",
      "#21 30.38 Downloading sympy-1.13.1-py3-none-any.whl (6.2 MB)\n",
      "#21 30.45    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 97.0 MB/s eta 0:00:00\n",
      "#21 30.47 Downloading triton-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (253.1 MB)\n",
      "#21 33.32    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 253.1/253.1 MB 88.9 MB/s eta 0:00:00\n",
      "#21 33.34 Downloading importlib_resources-6.5.2-py3-none-any.whl (37 kB)\n",
      "#21 33.35 Downloading packaging-25.0-py3-none-any.whl (66 kB)\n",
      "#21 33.37 Downloading pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl (4.6 MB)\n",
      "#21 33.42    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 83.5 MB/s eta 0:00:00\n",
      "#21 33.44 Downloading pyjpegls-1.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.7 MB)\n",
      "#21 33.47    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 94.7 MB/s eta 0:00:00\n",
      "#21 33.48 Downloading typing_extensions-4.13.2-py3-none-any.whl (45 kB)\n",
      "#21 33.50 Downloading filelock-3.18.0-py3-none-any.whl (16 kB)\n",
      "#21 33.51 Downloading fsspec-2025.3.2-py3-none-any.whl (194 kB)\n",
      "#21 33.53 Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)\n",
      "#21 33.54 Downloading networkx-3.4.2-py3-none-any.whl (1.7 MB)\n",
      "#21 33.56    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 91.2 MB/s eta 0:00:00\n",
      "#21 33.58 Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)\n",
      "#21 33.59 Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)\n",
      "#21 33.60    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 126.9 MB/s eta 0:00:00\n",
      "#21 40.78 Installing collected packages: triton, SimpleITK, nvidia-cusparselt-cu12, mpmath, typing-extensions, sympy, pydicom, pillow, packaging, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, MarkupSafe, importlib-resources, fsspec, filelock, pyjpegls, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nibabel, jinja2, nvidia-cusolver-cu12, highdicom, torch, monai\n",
      "#21 93.44 Successfully installed MarkupSafe-3.0.2 SimpleITK-2.4.1 filelock-3.18.0 fsspec-2025.3.2 highdicom-0.25.1 importlib-resources-6.5.2 jinja2-3.1.6 monai-1.4.0 mpmath-1.3.0 networkx-3.4.2 nibabel-5.3.2 numpy-1.26.4 nvidia-cublas-cu12-12.4.5.8 nvidia-cuda-cupti-cu12-12.4.127 nvidia-cuda-nvrtc-cu12-12.4.127 nvidia-cuda-runtime-cu12-12.4.127 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.2.1.3 nvidia-curand-cu12-10.3.5.147 nvidia-cusolver-cu12-11.6.1.9 nvidia-cusparse-cu12-12.3.1.170 nvidia-cusparselt-cu12-0.6.2 nvidia-nccl-cu12-2.21.5 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.4.127 packaging-25.0 pillow-11.2.1 pydicom-3.0.1 pyjpegls-1.4.0 sympy-1.13.1 torch-2.6.0 triton-3.2.0 typing-extensions-4.13.2\n",
      "#21 DONE 98.6s\n",
      "\n",
      "#22 [release 13/19] COPY ./monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl /tmp/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "#22 DONE 0.3s\n",
      "\n",
      "#23 [release 14/19] RUN pip install /tmp/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "#23 0.726 Defaulting to user installation because normal site-packages is not writeable\n",
      "#23 0.853 Processing /tmp/monai_deploy_app_sdk-0.5.1+37.g96f7e31.dirty-py3-none-any.whl\n",
      "#23 0.864 Requirement already satisfied: numpy>=1.21.6 in /home/holoscan/.local/lib/python3.10/site-packages (from monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (1.26.4)\n",
      "#23 1.007 Collecting holoscan~=3.0 (from monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.044   Downloading holoscan-3.1.0-cp310-cp310-manylinux_2_35_x86_64.whl.metadata (7.0 kB)\n",
      "#23 1.097 Collecting holoscan-cli~=3.0 (from monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.111   Downloading holoscan_cli-3.1.0-py3-none-any.whl.metadata (4.0 kB)\n",
      "#23 1.185 Collecting colorama>=0.4.1 (from monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.196   Downloading colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)\n",
      "#23 1.272 Collecting tritonclient>=2.53.0 (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.283   Downloading tritonclient-2.56.0-py3-none-manylinux1_x86_64.whl.metadata (2.8 kB)\n",
      "#23 1.381 Collecting typeguard>=3.0.0 (from monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.395   Downloading typeguard-4.4.2-py3-none-any.whl.metadata (3.8 kB)\n",
      "#23 1.423 Requirement already satisfied: pip>22.0.2 in /home/holoscan/.local/lib/python3.10/site-packages (from holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (25.0.1)\n",
      "#23 1.468 Collecting cupy-cuda12x<14.0,>=12.2 (from holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.483   Downloading cupy_cuda12x-13.4.1-cp310-cp310-manylinux2014_x86_64.whl.metadata (2.6 kB)\n",
      "#23 1.553 Collecting cloudpickle<4.0,>=3.0 (from holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.566   Downloading cloudpickle-3.1.1-py3-none-any.whl.metadata (7.1 kB)\n",
      "#23 1.687 Collecting wheel-axle-runtime<1.0 (from holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.702   Downloading wheel_axle_runtime-0.0.6-py3-none-any.whl.metadata (8.1 kB)\n",
      "#23 1.726 Requirement already satisfied: Jinja2<4.0.0,>=3.1.5 in /home/holoscan/.local/lib/python3.10/site-packages (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (3.1.6)\n",
      "#23 1.775 Collecting packaging<24.0,>=23.1 (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.788   Downloading packaging-23.2-py3-none-any.whl.metadata (3.2 kB)\n",
      "#23 1.916 Collecting psutil<7.0.0,>=6.0.0 (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 1.927   Downloading psutil-6.1.1-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (22 kB)\n",
      "#23 2.013 Collecting python-on-whales<0.61.0,>=0.60.1 (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.025   Downloading python_on_whales-0.60.1-py3-none-any.whl.metadata (16 kB)\n",
      "#23 2.113 Collecting pyyaml<7.0,>=6.0 (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.124   Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)\n",
      "#23 2.202 Collecting requests<3.0.0,>=2.31.0 (from holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.213   Downloading requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)\n",
      "#23 2.352 Collecting python-rapidjson>=0.9.1 (from tritonclient>=2.53.0->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.364   Downloading python_rapidjson-1.20-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (22 kB)\n",
      "#23 2.445 Collecting urllib3>=2.0.7 (from tritonclient>=2.53.0->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.456   Downloading urllib3-2.4.0-py3-none-any.whl.metadata (6.5 kB)\n",
      "#23 2.887 Collecting aiohttp<4.0.0,>=3.8.1 (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 2.898   Downloading aiohttp-3.11.18-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.7 kB)\n",
      "#23 2.991 Collecting cuda-python (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 3.002   Downloading cuda_python-12.8.0-py3-none-any.whl.metadata (15 kB)\n",
      "#23 3.159 Collecting geventhttpclient>=2.3.3 (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 3.170   Downloading geventhttpclient-2.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.7 kB)\n",
      "#23 3.657 Collecting grpcio<1.68,>=1.63.0 (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 3.669   Downloading grpcio-1.67.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.9 kB)\n",
      "#23 3.979 Collecting protobuf<6.0dev,>=5.26.1 (from tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 3.990   Downloading protobuf-5.29.4-cp38-abi3-manylinux2014_x86_64.whl.metadata (592 bytes)\n",
      "#23 4.008 Requirement already satisfied: typing_extensions>=4.10.0 in /home/holoscan/.local/lib/python3.10/site-packages (from typeguard>=3.0.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (4.13.2)\n",
      "#23 4.054 Collecting aiohappyeyeballs>=2.3.0 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.067   Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)\n",
      "#23 4.125 Collecting aiosignal>=1.1.2 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.138   Downloading aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB)\n",
      "#23 4.199 Collecting async-timeout<6.0,>=4.0 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.212   Downloading async_timeout-5.0.1-py3-none-any.whl.metadata (5.1 kB)\n",
      "#23 4.277 Collecting attrs>=17.3.0 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.289   Downloading attrs-25.3.0-py3-none-any.whl.metadata (10 kB)\n",
      "#23 4.407 Collecting frozenlist>=1.1.1 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.417   Downloading frozenlist-1.6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB)\n",
      "#23 4.625 Collecting multidict<7.0,>=4.5 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.636   Downloading multidict-6.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB)\n",
      "#23 4.722 Collecting propcache>=0.2.0 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 4.733   Downloading propcache-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)\n",
      "#23 5.044 Collecting yarl<2.0,>=1.17.0 (from aiohttp<4.0.0,>=3.8.1->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.055   Downloading yarl-1.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB)\n",
      "#23 5.139 Collecting fastrlock>=0.5 (from cupy-cuda12x<14.0,>=12.2->holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.150   Downloading fastrlock-0.8.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_28_x86_64.whl.metadata (7.7 kB)\n",
      "#23 5.306 Collecting gevent (from geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.317   Downloading gevent-25.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (13 kB)\n",
      "#23 5.398 Collecting certifi (from geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.411   Downloading certifi-2025.1.31-py3-none-any.whl.metadata (2.5 kB)\n",
      "#23 5.486 Collecting brotli (from geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.498   Downloading Brotli-1.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (5.5 kB)\n",
      "#23 5.527 Requirement already satisfied: MarkupSafe>=2.0 in /home/holoscan/.local/lib/python3.10/site-packages (from Jinja2<4.0.0,>=3.1.5->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (3.0.2)\n",
      "#23 5.679 Collecting pydantic<2,>=1.5 (from python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.691   Downloading pydantic-1.10.21-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (153 kB)\n",
      "#23 5.793 Collecting tqdm (from python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.804   Downloading tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)\n",
      "#23 5.917 Collecting typer>=0.4.1 (from python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 5.928   Downloading typer-0.15.2-py3-none-any.whl.metadata (15 kB)\n",
      "#23 6.053 Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.31.0->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.064   Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)\n",
      "#23 6.138 Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.31.0->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.150   Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)\n",
      "#23 6.201 Requirement already satisfied: filelock in /home/holoscan/.local/lib/python3.10/site-packages (from wheel-axle-runtime<1.0->holoscan~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (3.18.0)\n",
      "#23 6.232 Collecting cuda-bindings~=12.8.0 (from cuda-python->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.247   Downloading cuda_bindings-12.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (13 kB)\n",
      "#23 6.352 Collecting click>=8.0.0 (from typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.363   Downloading click-8.1.8-py3-none-any.whl.metadata (2.3 kB)\n",
      "#23 6.427 Collecting shellingham>=1.3.0 (from typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.438   Downloading shellingham-1.5.4-py2.py3-none-any.whl.metadata (3.5 kB)\n",
      "#23 6.530 Collecting rich>=10.11.0 (from typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.540   Downloading rich-14.0.0-py3-none-any.whl.metadata (18 kB)\n",
      "#23 6.781 Collecting greenlet>=3.2.0 (from gevent->geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.792   Downloading greenlet-3.2.1-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (4.1 kB)\n",
      "#23 6.843 Collecting zope.event (from gevent->geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 6.855   Downloading zope.event-5.0-py3-none-any.whl.metadata (4.4 kB)\n",
      "#23 7.003 Collecting zope.interface (from gevent->geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 7.014   Downloading zope.interface-7.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (44 kB)\n",
      "#23 7.104 Collecting markdown-it-py>=2.2.0 (from rich>=10.11.0->typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 7.116   Downloading markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)\n",
      "#23 7.189 Collecting pygments<3.0.0,>=2.13.0 (from rich>=10.11.0->typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 7.200   Downloading pygments-2.19.1-py3-none-any.whl.metadata (2.5 kB)\n",
      "#23 7.227 Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from zope.event->gevent->geventhttpclient>=2.3.3->tritonclient[all]>=2.53.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty) (59.6.0)\n",
      "#23 7.259 Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.11.0->typer>=0.4.1->python-on-whales<0.61.0,>=0.60.1->holoscan-cli~=3.0->monai-deploy-app-sdk==0.5.1+37.g96f7e31.dirty)\n",
      "#23 7.271   Downloading mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)\n",
      "#23 7.320 Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)\n",
      "#23 7.349 Downloading holoscan-3.1.0-cp310-cp310-manylinux_2_35_x86_64.whl (39.8 MB)\n",
      "#23 8.075    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.8/39.8 MB 55.3 MB/s eta 0:00:00\n",
      "#23 8.089 Downloading holoscan_cli-3.1.0-py3-none-any.whl (72 kB)\n",
      "#23 8.123 Downloading tritonclient-2.56.0-py3-none-manylinux1_x86_64.whl (14.4 MB)\n",
      "#23 8.397    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.4/14.4 MB 53.1 MB/s eta 0:00:00\n",
      "#23 8.411 Downloading typeguard-4.4.2-py3-none-any.whl (35 kB)\n",
      "#23 8.442 Downloading aiohttp-3.11.18-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)\n",
      "#23 8.495    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 29.5 MB/s eta 0:00:00\n",
      "#23 8.509 Downloading cloudpickle-3.1.1-py3-none-any.whl (20 kB)\n",
      "#23 8.541 Downloading cupy_cuda12x-13.4.1-cp310-cp310-manylinux2014_x86_64.whl (104.6 MB)\n",
      "#23 10.27    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 104.6/104.6 MB 60.5 MB/s eta 0:00:00\n",
      "#23 10.28 Downloading geventhttpclient-2.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (112 kB)\n",
      "#23 10.32 Downloading grpcio-1.67.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)\n",
      "#23 10.43    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 53.1 MB/s eta 0:00:00\n",
      "#23 10.44 Downloading packaging-23.2-py3-none-any.whl (53 kB)\n",
      "#23 10.47 Downloading protobuf-5.29.4-cp38-abi3-manylinux2014_x86_64.whl (319 kB)\n",
      "#23 10.51 Downloading psutil-6.1.1-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (287 kB)\n",
      "#23 10.54 Downloading python_on_whales-0.60.1-py3-none-any.whl (103 kB)\n",
      "#23 10.57 Downloading python_rapidjson-1.20-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.7 MB)\n",
      "#23 10.61    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 33.7 MB/s eta 0:00:00\n",
      "#23 10.63 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)\n",
      "#23 10.66    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 18.4 MB/s eta 0:00:00\n",
      "#23 10.68 Downloading requests-2.32.3-py3-none-any.whl (64 kB)\n",
      "#23 10.71 Downloading urllib3-2.4.0-py3-none-any.whl (128 kB)\n",
      "#23 10.74 Downloading wheel_axle_runtime-0.0.6-py3-none-any.whl (14 kB)\n",
      "#23 10.77 Downloading cuda_python-12.8.0-py3-none-any.whl (11 kB)\n",
      "#23 10.80 Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)\n",
      "#23 10.83 Downloading aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB)\n",
      "#23 10.87 Downloading async_timeout-5.0.1-py3-none-any.whl (6.2 kB)\n",
      "#23 10.90 Downloading attrs-25.3.0-py3-none-any.whl (63 kB)\n",
      "#23 10.93 Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)\n",
      "#23 10.97 Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)\n",
      "#23 11.00 Downloading cuda_bindings-12.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.4 MB)\n",
      "#23 11.20    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.4/11.4 MB 57.7 MB/s eta 0:00:00\n",
      "#23 11.22 Downloading fastrlock-0.8.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_28_x86_64.whl (53 kB)\n",
      "#23 11.25 Downloading frozenlist-1.6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (287 kB)\n",
      "#23 11.29 Downloading idna-3.10-py3-none-any.whl (70 kB)\n",
      "#23 11.32 Downloading multidict-6.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (219 kB)\n",
      "#23 11.35 Downloading propcache-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (206 kB)\n",
      "#23 11.39 Downloading pydantic-1.10.21-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.3 MB)\n",
      "#23 11.46    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 45.0 MB/s eta 0:00:00\n",
      "#23 11.48 Downloading typer-0.15.2-py3-none-any.whl (45 kB)\n",
      "#23 11.51 Downloading yarl-1.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (333 kB)\n",
      "#23 11.54 Downloading Brotli-1.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.0 MB)\n",
      "#23 11.61    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.0/3.0 MB 45.0 MB/s eta 0:00:00\n",
      "#23 11.63 Downloading gevent-25.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.2 MB)\n",
      "#23 11.69    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 39.6 MB/s eta 0:00:00\n",
      "#23 11.70 Downloading tqdm-4.67.1-py3-none-any.whl (78 kB)\n",
      "#23 11.73 Downloading click-8.1.8-py3-none-any.whl (98 kB)\n",
      "#23 11.77 Downloading greenlet-3.2.1-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (580 kB)\n",
      "#23 11.80    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 580.6/580.6 kB 14.2 MB/s eta 0:00:00\n",
      "#23 11.81 Downloading rich-14.0.0-py3-none-any.whl (243 kB)\n",
      "#23 11.85 Downloading shellingham-1.5.4-py2.py3-none-any.whl (9.8 kB)\n",
      "#23 11.88 Downloading zope.event-5.0-py3-none-any.whl (6.8 kB)\n",
      "#23 11.91 Downloading zope.interface-7.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (254 kB)\n",
      "#23 11.94 Downloading markdown_it_py-3.0.0-py3-none-any.whl (87 kB)\n",
      "#23 11.97 Downloading pygments-2.19.1-py3-none-any.whl (1.2 MB)\n",
      "#23 12.01    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 28.2 MB/s eta 0:00:00\n",
      "#23 12.03 Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)\n",
      "#23 12.84 Installing collected packages: fastrlock, cuda-bindings, brotli, zope.interface, zope.event, wheel-axle-runtime, urllib3, typeguard, tqdm, shellingham, pyyaml, python-rapidjson, pygments, pydantic, psutil, protobuf, propcache, packaging, multidict, mdurl, idna, grpcio, greenlet, frozenlist, cupy-cuda12x, cuda-python, colorama, cloudpickle, click, charset-normalizer, certifi, attrs, async-timeout, aiohappyeyeballs, yarl, tritonclient, requests, markdown-it-py, holoscan, gevent, aiosignal, rich, geventhttpclient, aiohttp, typer, python-on-whales, holoscan-cli, monai-deploy-app-sdk\n",
      "#23 14.78   Attempting uninstall: packaging\n",
      "#23 14.78     Found existing installation: packaging 25.0\n",
      "#23 14.79     Uninstalling packaging-25.0:\n",
      "#23 14.81       Successfully uninstalled packaging-25.0\n",
      "#23 19.62 Successfully installed aiohappyeyeballs-2.6.1 aiohttp-3.11.18 aiosignal-1.3.2 async-timeout-5.0.1 attrs-25.3.0 brotli-1.1.0 certifi-2025.1.31 charset-normalizer-3.4.1 click-8.1.8 cloudpickle-3.1.1 colorama-0.4.6 cuda-bindings-12.8.0 cuda-python-12.8.0 cupy-cuda12x-13.4.1 fastrlock-0.8.3 frozenlist-1.6.0 gevent-25.4.1 geventhttpclient-2.3.3 greenlet-3.2.1 grpcio-1.67.1 holoscan-3.1.0 holoscan-cli-3.1.0 idna-3.10 markdown-it-py-3.0.0 mdurl-0.1.2 monai-deploy-app-sdk-0.5.1+37.g96f7e31.dirty multidict-6.4.3 packaging-23.2 propcache-0.3.1 protobuf-5.29.4 psutil-6.1.1 pydantic-1.10.21 pygments-2.19.1 python-on-whales-0.60.1 python-rapidjson-1.20 pyyaml-6.0.2 requests-2.32.3 rich-14.0.0 shellingham-1.5.4 tqdm-4.67.1 tritonclient-2.56.0 typeguard-4.4.2 typer-0.15.2 urllib3-2.4.0 wheel-axle-runtime-0.0.6 yarl-1.20.0 zope.event-5.0 zope.interface-7.2\n",
      "#23 DONE 21.6s\n",
      "\n",
      "#24 [release 15/19] COPY ./models  /opt/holoscan/models\n",
      "#24 DONE 0.3s\n",
      "\n",
      "#25 [release 16/19] COPY ./map/app.json /etc/holoscan/app.json\n",
      "#25 DONE 0.1s\n",
      "\n",
      "#26 [release 17/19] COPY ./app.config /var/holoscan/app.yaml\n",
      "#26 DONE 0.1s\n",
      "\n",
      "#27 [release 18/19] COPY ./map/pkg.json /etc/holoscan/pkg.json\n",
      "#27 DONE 0.1s\n",
      "\n",
      "#28 [release 19/19] COPY ./app /opt/holoscan/app\n",
      "#28 DONE 0.1s\n",
      "\n",
      "#29 exporting to docker image format\n",
      "#29 exporting layers\n",
      "#29 exporting layers 175.3s done\n",
      "#29 exporting manifest sha256:f63297f6525a89f74b13e561b30821ab4985a18db4b815eb995ac1aed030557b 0.0s done\n",
      "#29 exporting config sha256:7266e968de607504eff9dfbb7c4e0c00adf190678ce9232099aab9c0d2d1cb24 0.0s done\n",
      "#29 sending tarball\n",
      "#29 ...\n",
      "\n",
      "#30 importing to docker\n",
      "#30 loading layer 481caafed616 251B / 251B\n",
      "#30 loading layer e39cf4d7d38e 65.54kB / 5.09MB\n",
      "#30 loading layer 3795307a2740 557.06kB / 3.20GB\n",
      "#30 loading layer 3795307a2740 130.91MB / 3.20GB 6.2s\n",
      "#30 loading layer 3795307a2740 278.53MB / 3.20GB 12.4s\n",
      "#30 loading layer 3795307a2740 483.52MB / 3.20GB 16.5s\n",
      "#30 loading layer 3795307a2740 677.94MB / 3.20GB 20.7s\n",
      "#30 loading layer 3795307a2740 851.18MB / 3.20GB 24.8s\n",
      "#30 loading layer 3795307a2740 1.06GB / 3.20GB 31.0s\n",
      "#30 loading layer 3795307a2740 1.25GB / 3.20GB 35.1s\n",
      "#30 loading layer 3795307a2740 1.48GB / 3.20GB 39.2s\n",
      "#30 loading layer 3795307a2740 1.70GB / 3.20GB 43.3s\n",
      "#30 loading layer 3795307a2740 1.91GB / 3.20GB 47.5s\n",
      "#30 loading layer 3795307a2740 2.07GB / 3.20GB 51.6s\n",
      "#30 loading layer 3795307a2740 2.17GB / 3.20GB 57.8s\n",
      "#30 loading layer 3795307a2740 2.25GB / 3.20GB 64.8s\n",
      "#30 loading layer 3795307a2740 2.49GB / 3.20GB 71.1s\n",
      "#30 loading layer 3795307a2740 2.70GB / 3.20GB 75.2s\n",
      "#30 loading layer 3795307a2740 2.88GB / 3.20GB 81.4s\n",
      "#30 loading layer 3795307a2740 3.05GB / 3.20GB 87.4s\n",
      "#30 loading layer 14bfd28d96ba 32.77kB / 144.30kB\n",
      "#30 loading layer 643060716c54 557.06kB / 398.53MB\n",
      "#30 loading layer 643060716c54 155.42MB / 398.53MB 2.1s\n",
      "#30 loading layer 643060716c54 223.38MB / 398.53MB 4.2s\n",
      "#30 loading layer 643060716c54 259.59MB / 398.53MB 6.2s\n",
      "#30 loading layer 643060716c54 338.13MB / 398.53MB 8.4s\n",
      "#30 loading layer 643060716c54 391.61MB / 398.53MB 10.5s\n",
      "#30 loading layer bccb4e460f68 196.61kB / 17.81MB\n",
      "#30 loading layer 61be03e60d84 492B / 492B\n",
      "#30 loading layer aeec4a674fef 315B / 315B\n",
      "#30 loading layer 54cba3cb0592 301B / 301B\n",
      "#30 loading layer 462f716907a1 3.91kB / 3.91kB\n",
      "#30 loading layer 462f716907a1 3.91kB / 3.91kB 0.4s done\n",
      "#30 loading layer 481caafed616 251B / 251B 105.8s done\n",
      "#30 loading layer e39cf4d7d38e 5.09MB / 5.09MB 105.7s done\n",
      "#30 loading layer 3795307a2740 3.20GB / 3.20GB 105.1s done\n",
      "#30 loading layer 14bfd28d96ba 144.30kB / 144.30kB 12.5s done\n",
      "#30 loading layer 643060716c54 398.53MB / 398.53MB 12.4s done\n",
      "#30 loading layer bccb4e460f68 17.81MB / 17.81MB 0.9s done\n",
      "#30 loading layer 61be03e60d84 492B / 492B 0.6s done\n",
      "#30 loading layer aeec4a674fef 315B / 315B 0.5s done\n",
      "#30 loading layer 54cba3cb0592 301B / 301B 0.4s done\n",
      "#30 DONE 105.8s\n",
      "\n",
      "#29 exporting to docker image format\n",
      "#29 sending tarball 135.1s done\n",
      "#29 DONE 310.5s\n",
      "\n",
      "#31 exporting cache to client directory\n",
      "#31 preparing build cache for export\n",
      "#31 writing layer sha256:0081cdb9958a9d50332b830133ae001192a5065ac4f0e3c095b3a1d5d5ff0265\n",
      "#31 writing layer sha256:0081cdb9958a9d50332b830133ae001192a5065ac4f0e3c095b3a1d5d5ff0265 0.0s done\n",
      "#31 writing layer sha256:1a0d52c93099897b518eb6cc6cd0fa3d52ff733e8606b4d8c92675ba9e7101ff done\n",
      "#31 writing layer sha256:234b866f57e0c5d555af2d87a1857a17ec4ac7e70d2dc6c31ff0a072a4607f24 done\n",
      "#31 writing layer sha256:255905badeaa82f032e1043580eed8b745c19cd4a2cb7183883ee5a30f851d6d done\n",
      "#31 writing layer sha256:287e630d01a5fdd05d03906401ef55472af7d087036f46dbc2bd8e3922500d1a 0.0s done\n",
      "#31 writing layer sha256:3713021b02770a720dea9b54c03d0ed83e03a2ef5dce2898c56a327fee9a8bca done\n",
      "#31 writing layer sha256:3a80776cdc9c9ef79bb38510849c9160f82462d346bf5a8bf29c811391b4e763 done\n",
      "#31 writing layer sha256:46c9c54348df10b0d7700bf932d5de7dc5bf9ab91e685db7086e29e381ff8e12 done\n",
      "#31 writing layer sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 done\n",
      "#31 writing layer sha256:5b90b93bdc8509aa597670e5542315cfcf5e462fd5f032cd9f20105de9574874\n",
      "#31 writing layer sha256:5b90b93bdc8509aa597670e5542315cfcf5e462fd5f032cd9f20105de9574874 50.6s done\n",
      "#31 writing layer sha256:60aea8801e5272305832cc3e60cd84c63f0d58d80a872b7357356957d261c574\n",
      "#31 writing layer sha256:60aea8801e5272305832cc3e60cd84c63f0d58d80a872b7357356957d261c574 0.0s done\n",
      "#31 writing layer sha256:67b3546b211deefd67122e680c0932886e0b3c6bd6ae0665e3ab25d2d9f0cda0 done\n",
      "#31 writing layer sha256:78f2accaffaf576042c7ebead20caa88db32984713ae7e35691a3be4f3301d0c\n",
      "#31 writing layer sha256:78f2accaffaf576042c7ebead20caa88db32984713ae7e35691a3be4f3301d0c 7.9s done\n",
      "#31 writing layer sha256:7f9be78d50c54946e6e71991e35dd38adb2967f404f207bcc854e528571f923c\n",
      "#31 writing layer sha256:7f9be78d50c54946e6e71991e35dd38adb2967f404f207bcc854e528571f923c 0.0s done\n",
      "#31 writing layer sha256:935b4cb3480886ca00a46c28cd98797870cfc7389818c85cd243869f4548fda4 done\n",
      "#31 writing layer sha256:95dbda2f5f8116a35367b28d397faae7d34bd4a713aefe01ccfe5e326b0b0250 done\n",
      "#31 writing layer sha256:980c13e156f90218b216bc6b0430472bbda71c0202804d350c0e16ef02075885 done\n",
      "#31 writing layer sha256:9ebe27a7cf7d039e6f4d4b82e9f34985c02f5dca091fa01f4585191f6facaec1 0.0s done\n",
      "#31 writing layer sha256:ac52600be001236a2c291a4c5902c915bf5ec9d2441c06d2a54c587b76345847 done\n",
      "#31 writing layer sha256:bc25d810fc1fd99656c1b07d422e88cdb896508730175bc3ec187b79f3787044 done\n",
      "#31 writing layer sha256:d0b9db5eaf93e490f07bab8abb1ac5475febcf822c25f2e1d1c82ff4273a7d0d done\n",
      "#31 writing layer sha256:d339273dfb7fc3b7fd896d3610d360ab9a09ab33a818093cb73b4be7639b6e99 done\n",
      "#31 writing layer sha256:da44fb0aa6d6f7c651c7eec8e11510c9c048b066b2ba36b261cefea12ff5ee3e done\n",
      "#31 writing layer sha256:dd250fa54efc49bc2c03cccb8d3a56ebf8ce96ad291d924e6ead2036c0d251da\n",
      "#31 writing layer sha256:dd250fa54efc49bc2c03cccb8d3a56ebf8ce96ad291d924e6ead2036c0d251da 0.4s done\n",
      "#31 writing layer sha256:dec17c052060552bd6c5810c57aa0195e7c9776da97eeb16984d2f31a35d816b\n",
      "#31 writing layer sha256:dec17c052060552bd6c5810c57aa0195e7c9776da97eeb16984d2f31a35d816b 0.0s done\n",
      "#31 writing layer sha256:e7cb8fb70ca3287e6c873a5263dfc4f8e333b6f965e6027a24a5f4b6fdc89a69 0.1s done\n",
      "#31 writing layer sha256:efc9014e2a4cb1e133b80bb4f047e9141e98685eb95b8d2471a8e35b86643e31\n",
      "#31 preparing build cache for export 59.4s done\n",
      "#31 writing layer sha256:efc9014e2a4cb1e133b80bb4f047e9141e98685eb95b8d2471a8e35b86643e31 done\n",
      "#31 writing layer sha256:f3af93a430a247328c59fb2228f6fa43a0ce742b03464db94acf7c45311e31cd done\n",
      "#31 writing config sha256:ae0dee53261b5588107aa98a4ac08135ba44a7551d5ee93e258b0bbe7f352c58 0.0s done\n",
      "#31 writing cache manifest sha256:c2e62667fa04d787de4d2de795baa75004e61b473a5ab4129713ad2d397c78b8 0.0s done\n",
      "#31 DONE 59.4s\n",
      "[2025-04-22 10:15:23,826] [INFO] (packager) - Build Summary:\n",
      "\n",
      "Platform: x64-workstation/dgpu\n",
      "    Status:     Succeeded\n",
      "    Docker Tag: my_app-x64-workstation-dgpu-linux-amd64:1.0\n",
      "    Tarball:    None\n"
     ]
    }
   ],
   "source": [
    "tag_prefix = \"my_app\"\n",
    "\n",
    "!monai-deploy package my_app -m {models_folder} -c my_app/app.yaml -t {tag_prefix}:1.0 --platform x86_64 -l DEBUG"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the MAP Docker image is created.\n",
    "\n",
    "We can choose to display and inspect the MAP manifests by running the container with the `show` command, as well as extracting the manifests and other contents in the MAP by using the `extract` command, but not demonstrated in this example."
   ]
  },
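  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch only (not executed in this tutorial), the embedded manifests could hypothetically be displayed by running the MAP image with the `show` command, assuming the image tag built above:\n",
    "\n",
    "```bash\n",
    "# Print the app and package manifests embedded in the MAP image\n",
    "docker run --rm my_app-x64-workstation-dgpu-linux-amd64:1.0 show\n",
    "```\n"
   ]
  },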
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "my_app-x64-workstation-dgpu-linux-amd64                                       1.0                            7266e968de60   6 minutes ago    9.07GB\n"
     ]
    }
   ],
   "source": [
    "!docker image ls | grep {tag_prefix}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Executing packaged app locally\n",
    "\n",
    "The packaged app can be run locally through [MONAI Application Runner](/developing_with_sdk/executing_packaged_app_locally)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "output\n",
      "dcm\n",
      "[2025-04-22 10:15:26,072] [INFO] (runner) - Checking dependencies...\n",
      "[2025-04-22 10:15:26,072] [INFO] (runner) - --> Verifying if \"docker\" is installed...\n",
      "\n",
      "[2025-04-22 10:15:26,074] [INFO] (runner) - --> Verifying if \"docker-buildx\" is installed...\n",
      "\n",
      "[2025-04-22 10:15:26,074] [INFO] (runner) - --> Verifying if \"my_app-x64-workstation-dgpu-linux-amd64:1.0\" is available...\n",
      "\n",
      "[2025-04-22 10:15:26,152] [INFO] (runner) - Reading HAP/MAP manifest...\n",
      "Successfully copied 2.56kB to /tmp/tmpxd644i6e/app.json\n",
      "Successfully copied 2.05kB to /tmp/tmpxd644i6e/pkg.json\n",
      "3e8dc45282382e26bf37bf8ab8bafbfe8a12c219cfb364cc1a38bc3c645bcbf8\n",
      "[2025-04-22 10:15:26,569] [INFO] (runner) - --> Verifying if \"nvidia-ctk\" is installed...\n",
      "\n",
      "[2025-04-22 10:15:26,579] [INFO] (runner) - --> Verifying \"nvidia-ctk\" version...\n",
      "\n",
      "[2025-04-22 10:15:27,009] [INFO] (common) - Launching container (81398038b14f) using image 'my_app-x64-workstation-dgpu-linux-amd64:1.0'...\n",
      "    container name:      youthful_pare\n",
      "    host name:           mingq-dt\n",
      "    network:             host\n",
      "    user:                1000:1000\n",
      "    ulimits:             memlock=-1:-1, stack=67108864:67108864\n",
      "    cap_add:             CAP_SYS_PTRACE\n",
      "    ipc mode:            host\n",
      "    shared memory size:  67108864\n",
      "    devices:             \n",
      "    group_add:           44\n",
      "2025-04-22 17:15:27 [INFO] Launching application python3 /opt/holoscan/app ...\n",
      "\n",
      "[info] [fragment.cpp:705] Loading extensions from configs...\n",
      "\n",
      "[info] [gxf_executor.cpp:265] Creating context\n",
      "\n",
      "[2025-04-22 17:15:33,511] [INFO] (root) - Parsed args: Namespace(log_level=None, input=None, output=None, model=None, workdir=None, triton_server_netloc=None, argv=['/opt/holoscan/app'])\n",
      "\n",
      "[2025-04-22 17:15:33,514] [INFO] (root) - AppContext object: AppContext(input_path=/var/holoscan/input, output_path=/var/holoscan/output, model_path=/opt/holoscan/models, workdir=/var/holoscan), triton_server_netloc=\n",
      "\n",
      "[2025-04-22 17:15:33,514] [INFO] (app.AISpleenSegApp) - App input and output path: /var/holoscan/input, /var/holoscan/output\n",
      "\n",
      "[info] [gxf_executor.cpp:2396] Activating Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2426] Running Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2428] Waiting for completion...\n",
      "\n",
      "[info] [greedy_scheduler.cpp:191] Scheduling 5 entities\n",
      "\n",
      "[2025-04-22 17:15:33,524] [INFO] (monai.deploy.operators.dicom_data_loader_operator.DICOMDataLoaderOperator) - No or invalid input path from the optional input port: None\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) - Finding series for Selection named: CT Series\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) - Searching study, : 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291\n",
      "\n",
      "  # of series: 1\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) - Working on series, instance UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) - On attribute: 'StudyDescription' to match value: '(.*?)'\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) -     Series attribute StudyDescription value: CT ABDOMEN W IV CONTRAST\n",
      "\n",
      "[2025-04-22 17:15:33,962] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - On attribute: 'Modality' to match value: '(?i)CT'\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) -     Series attribute Modality value: CT\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - On attribute: 'SeriesDescription' to match value: '(.*?)'\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) -     Series attribute SeriesDescription value: ABD/PANC 3.0 B31f\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Series attribute string value did not match. Try regEx.\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - On attribute: 'ImageType' to match value: ['PRIMARY', 'ORIGINAL']\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) -     Series attribute ImageType value: None\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Selected Series, UID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Series Selection finalized.\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Series Description of selected DICOM Series for inference: ABD/PANC 3.0 B31f\n",
      "\n",
      "[2025-04-22 17:15:33,963] [INFO] (root) - Series Instance UID of selected DICOM Series for inference: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239\n",
      "\n",
      "[2025-04-22 17:15:34,270] [INFO] (root) - Casting to float32\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Converted Image object metadata:\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.119403521930927333027265674239, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDate: 20090831, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesTime: 101721.452, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Modality: CT, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesDescription: ABD/PANC 3.0 B31f, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - PatientPosition: HFS, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - SeriesNumber: 8, type <class 'int'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_pixel_spacing: 0.7890625, type <class 'float'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_pixel_spacing: 1.5, type <class 'float'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - row_direction_cosine: [1.0, 0.0, 0.0], type <class 'list'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - col_direction_cosine: [0.0, 1.0, 0.0], type <class 'list'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - depth_direction_cosine: [0.0, 0.0, 1.0], type <class 'list'>\n",
      "\n",
      "[2025-04-22 17:15:34,406] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - dicom_affine_transform: [[   0.7890625    0.           0.        -197.60547  ]\n",
      "\n",
      " [   0.           0.7890625    0.        -398.60547  ]\n",
      "\n",
      " [   0.           0.           1.5       -383.       ]\n",
      "\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - nifti_affine_transform: [[  -0.7890625   -0.          -0.         197.60547  ]\n",
      "\n",
      " [  -0.          -0.7890625   -0.         398.60547  ]\n",
      "\n",
      " [   0.           0.           1.5       -383.       ]\n",
      "\n",
      " [   0.           0.           0.           1.       ]], type <class 'numpy.ndarray'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyInstanceUID: 1.3.6.1.4.1.14519.5.2.1.7085.2626.822645453932810382886582736291, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyID: , type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDate: 20090831, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyTime: 095948.599, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - StudyDescription: CT ABDOMEN W IV CONTRAST, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - AccessionNumber: 5471978513296937, type <class 'str'>\n",
      "\n",
      "[2025-04-22 17:15:34,407] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - selection_name: CT Series, type <class 'str'>\n",
      "\n",
      "2025-04-22 17:15:35,292 INFO image_writer.py:197 - writing: /var/holoscan/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626.nii\n",
      "\n",
      "[2025-04-22 17:15:37,261] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Input of <class 'monai.data.meta_tensor.MetaTensor'> shape: torch.Size([1, 1, 270, 270, 106])\n",
      "\n",
      "2025-04-22 17:15:39,032 INFO image_writer.py:197 - writing: /var/holoscan/output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626/1.3.6.1.4.1.14519.5.2.1.7085.2626_seg.nii\n",
      "\n",
      "[2025-04-22 17:15:40,582] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform length/batch size of output: 1\n",
      "\n",
      "[2025-04-22 17:15:40,587] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pixel spacings for pred: tensor([0.7891, 0.7891, 1.5000], dtype=torch.float64)\n",
      "\n",
      "[2025-04-22 17:15:40,719] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Post transform pred of <class 'numpy.ndarray'> shape: (1, 512, 512, 204)\n",
      "\n",
      "[2025-04-22 17:15:40,758] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image numpy array of type <class 'numpy.ndarray'> shape: (204, 512, 512)\n",
      "\n",
      "[2025-04-22 17:15:40,763] [INFO] (monai.deploy.operators.monai_seg_inference_operator.MonaiSegInferenceOperator) - Output Seg image pixel max value: 1\n",
      "\n",
      "/home/holoscan/.local/lib/python3.10/site-packages/highdicom/base.py:163: UserWarning: The string \"C3N-00198\" is unlikely to represent the intended person name since it contains only a single component. Construct a person name according to the format in described in https://dicom.nema.org/dicom/2013/output/chtml/part05/sect_6.2.html#sect_6.2.1.2, or, in pydicom 2.2.0 or later, use the pydicom.valuerep.PersonName.from_named_components() method to construct the person name correctly. If a single-component name is really intended, add a trailing caret character to disambiguate the name.\n",
      "\n",
      "  check_person_name(patient_name)\n",
      "\n",
      "[2025-04-22 17:15:41,997] [INFO] (highdicom.base) - copy Image-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 17:15:41,997] [INFO] (highdicom.base) - copy attributes of module \"Specimen\"\n",
      "\n",
      "[2025-04-22 17:15:41,997] [INFO] (highdicom.base) - copy Patient-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 17:15:41,997] [INFO] (highdicom.base) - copy attributes of module \"Patient\"\n",
      "\n",
      "[2025-04-22 17:15:41,998] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Subject\"\n",
      "\n",
      "[2025-04-22 17:15:41,998] [INFO] (highdicom.base) - copy Study-related attributes from dataset \"1.3.6.1.4.1.14519.5.2.1.7085.2626.936983343951485811186213470191\"\n",
      "\n",
      "[2025-04-22 17:15:41,998] [INFO] (highdicom.base) - copy attributes of module \"General Study\"\n",
      "\n",
      "[2025-04-22 17:15:41,998] [INFO] (highdicom.base) - copy attributes of module \"Patient Study\"\n",
      "\n",
      "[2025-04-22 17:15:41,999] [INFO] (highdicom.base) - copy attributes of module \"Clinical Trial Study\"\n",
      "\n",
      "[info] [greedy_scheduler.cpp:372] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.\n",
      "\n",
      "[info] [greedy_scheduler.cpp:401] Scheduler finished.\n",
      "\n",
      "[info] [gxf_executor.cpp:2431] Deactivating Graph...\n",
      "\n",
      "[info] [gxf_executor.cpp:2439] Graph execution finished.\n",
      "\n",
      "[2025-04-22 17:15:42,103] [INFO] (app.AISpleenSegApp) - End run\n",
      "\n",
      "[info] [gxf_executor.cpp:295] Destroying context\n",
      "\n",
      "[2025-04-22 10:15:44,346] [INFO] (common) - Container 'youthful_pare'(81398038b14f) exited.\n"
     ]
    }
   ],
   "source": [
    "# Echo the input/output paths, clear the output folder, and run the MAP. The input is expected to be a folder of DICOM instance files.\n",
    "!echo $HOLOSCAN_OUTPUT_PATH\n",
    "!echo $HOLOSCAN_INPUT_PATH\n",
    "!rm -rf $HOLOSCAN_OUTPUT_PATH\n",
    "!monai-deploy run -i $HOLOSCAN_INPUT_PATH -o $HOLOSCAN_OUTPUT_PATH my_app-x64-workstation-dgpu-linux-amd64:1.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "output:\n",
      "1.2.826.0.1.3680043.10.511.3.89222091780069825813597121405605044.dcm\n",
      "saved_images_folder\n",
      "\n",
      "output/saved_images_folder:\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626\n",
      "\n",
      "output/saved_images_folder/1.3.6.1.4.1.14519.5.2.1.7085.2626:\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626.nii\n",
      "1.3.6.1.4.1.14519.5.2.1.7085.2626_seg.nii\n"
     ]
    }
   ],
   "source": [
    "!ls -R $HOLOSCAN_OUTPUT_PATH"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
