diff --git "a/20240921/2405.17520v4.json" "b/20240921/2405.17520v4.json" new file mode 100644--- /dev/null +++ "b/20240921/2405.17520v4.json" @@ -0,0 +1,633 @@ +{ + "title": "Advancing Medical Image Segmentation with Mini-Net: A Lightweight Solution Tailored for Efficient Segmentation of Medical Images", + "abstract": "Accurate segmentation of anatomical structures and abnormalities in medical images is crucial for computer-aided diagnosis and analysis. While deep learning techniques excel at this task, their computational demands pose challenges. Additionally, some cutting-edge segmentation methods, though effective for general object segmentation, may not be optimised for medical images. We propose Mini-Net, a lightweight segmentation network specifically designed for medical images to address these issues. With fewer than 38,000 parameters, Mini-Net efficiently captures both high- and low-frequency features, enabling real-time applications in various medical imaging scenarios. We evaluate Mini-Net on various datasets, including DRIVE, STARE, ISIC-2016, ISIC-2018, and MoNuSeg, demonstrating its robustness and good performance compared to state-of-the-art methods.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Medical image segmentation represents a cutting-edge convergence of medical imaging and computer vision, with a focus on extracting meaningful insights from intricate medical images. The surge in imaging technologies such as magnetic resonance imaging (MRI), computed tomography (CT), and PET underscores the growing importance of accurately delineating and analysing anatomical structures or pathological regions within these images. This precision has become indispensable in clinical diagnosis, treatment planning, and medical research.\nAccurate segmentation of anatomical structures and abnormalities in medical images is essential for a precise diagnosis and optimal treatment planning [20 ###reference_b20###, 52 ###reference_b52###, 55 ###reference_b55###, 54 ###reference_b54###]. However, this task poses significant challenges, even for human experts, due to factors such as ambiguous structural boundaries, diverse textures, imbalanced intensity distribution, inherent uncertainty in segmented regions, contrast variations, and scarcity of annotated datasets. The urgency of automated segmentation techniques in medical imaging has spurred numerous research endeavours aimed at overcoming these challenges. For example, a fully convolutional multiscale residual network was proposed for segmentation of retinal vessels, using three multi-scale kernels to capture large, medium, and thin vessels[32 ###reference_b32###]. Segmentation of large and thin retinal vessels was addressed through a block matching mechanism and multiscale triple stick filtering[28 ###reference_b28###]. An improved ensemble block matching was also proposed to automate the detection of fine vessels in noisy fundus images[43 ###reference_b43###, 10 ###reference_b10###]. Existing segmentation techniques can be broadly categorised as supervised and unsupervised approaches. 
Supervised approaches involve learning from annotated training images provided in pairs (image, mask), whereas unsupervised methods lack annotation and rely on low-level features and ad-hoc rules, which limit their generalisability.\nSupervised deep learning-based techniques, particularly convolutional neural networks (CNN), have emerged as leaders in medical image segmentation[41 ###reference_b41###, 46 ###reference_b46###, 47 ###reference_b47###, 45 ###reference_b45###]. Despite the prowess of these models, there is a need for solutions tailored to resource-constrained devices. To meet this challenge, Khan et al. [29 ###reference_b29###] analysed image complexity to develop a macrolevel neural network for medical image segmentation. They use a variant of U-Net with a decreased number of filters and reduced depth of encoder blocks to minimise the model capacity and size. Iqbal et al. [22 ###reference_b22###] devised a small-scale neural network for the segmentation of retinal vessels, eliminating feature overlap to reduce computational redundancy. [27 ###reference_b27###] refines the receptive field using multiple kernels with different sizes to improve segmentation performance. [26 ###reference_b26###] utilises a multi-scale cascaded path to design a network with 1.3 million parameters for polyp segmentation.\nIn [25 ###reference_b25###], the authors present a feature enhancement segmentation network that alleviates the need for pre-training image enhancement, reducing associated computational overhead. The authors of [21 ###reference_b21###, 30 ###reference_b30###] and [18 ###reference_b18###] build networks with a restricted number of trainable parameters, tailored for devices with limited resources. Although MobileNet-V3 [18 ###reference_b18###] excels in object segmentation, it is not optimised for medical image segmentation. In this paper, we introduce a remarkably lightweight model, Mini-Net, explicitly designed for medical image segmentation that caters to devices with limited computing power. Key contributions of this work include the following:\nAn innovative simplified architectural design consisting of dual multi-residual block (DMRes) and Expand Sequaze blocks tailored for medical image segmentation, incorporating robust features selection.\nThe lightweight segmentation network (Mini-Net) is aided by a dual multi-residual block consisting of only parameters, which beats all existing works and is super fast and memory efficient compared to existing models.\nExtensive experiment conducted on multiple medical imaging datasets showed significant performance of the model, demonstrating state-of-the-art results." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Literature Review", + "text": "Medical image segmentation has attracted the attention of researchers due to increased health complications and increased diseases due to environmental changes and lifestyles of people. Accurate segmentation of medical images poses significant challenges due to factors such as ambiguous structural boundaries, diverse textures, imbalanced intensity distribution, inherent uncertainty in segmented regions, contrast variations, and scarcity of annotated datasets. We will further discuss how researchers have attempted to meet these challenges. Existing segmentation techniques can be broadly categorised as supervised and semi-supervised approaches. 
In this section, we will discuss various aspects of medical image segmentation applications devised by different deep learning and computer vision specialists over the years." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Supervised Deep Learning based Techniques", + "text": "Supervised deep learning-based techniques achieved the best results so far for the segmentation of images including medical images. There has been a notable improvement in neural network architectures for medical image segmentations in terms of model backbones, model building blocks, hyperparameters, and optimised loss functions. In semantic segmentation of medical images, we aim to classify every individual pixel in the image, and to achieve this, most researchers have proposed the encoder-decoder architecture that has been used in most of the current state-of-the-art techniques for segmentation such as U-Net [49 ###reference_b49###], generative adversarial networks (GANs) and numerous variants of U-Net. In encoder-decoder-based techniques, we have an encoder that extracts image features at various levels, and then the decoder blocks decipher the extracted features and restore the original image.\nThe journey of supervised learning-based segmentation begins with fully convolutional neural networks (FCN). FCN was initially introduced by adding fully connected layers at the end of convolutional neural networks to obtain probability information. This was only for image classification and not for pixel-level classification. SegNet [4 ###reference_b4###], introduced by Nakazawa et al., is designed for pixel-level classification of images (i.e. segmentation) and is built upon the FCN semantic segmentation task and has an encoder-decoder-based structure. The authors use VGG16 as the network encoder block to retrieve image features, and the decoder block uses these features to assign a colour label to each pixel in the image. While FCN upsamples the low-resolution features with deconvolution operations, SegNet upsamples them using a more extensive pooling index from the encoder instead of learning how to do so. In this way, SegNet creates dense features using trainable convolution kernels on sparse feature maps, and the softmax classifier categorises pixels after restoring the maps to their original resolution. Unpooling of the low-level features maintains high-frequency data, which helps to preserve image details. This process can contribute to better performance in tasks that require fine-grained information, such as edge detection. Despite the advantages that SegNet offers, it also comes with challenges and limitations such as requiring resources with large memory and high computational power, overfitting, shallow semantic understanding, unable to handle occlusions and object interactions, producing noisy and jagged boundaries for objects, and having limited generalisation capability. We will need to take further precautionary steps to overcome the limitations of SegNet.\nSegAN [62 ###reference_b62###], the adversarial segmentation network, is a U-Net-based network that uses adversarial learning for segmentation. The authors efficiently tackle the issue of class imbalance between pixel categories by alternatingly training a segmenter and a critic network in a Min-Max game and by using a multiscale L1 loss function. The multiscale L1 loss function helps capture both local and global features during training and consequently improves the segmentation performance of the network. 
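To make this multiscale L1 idea concrete, the sketch below shows one way it can be realised: the input image masked by the predicted segmentation and the image masked by the ground truth are both passed through the critic, and the mean absolute differences between the resulting feature maps are averaged over several scales. The critic architecture, channel widths and layer counts here are illustrative assumptions rather than the configuration used in SegAN.

```python
import torch
import torch.nn as nn

class TinyCritic(nn.Module):
    """Toy critic that exposes feature maps at several scales (illustrative only)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2)),
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)          # one feature map per scale
        return feats

def multiscale_l1(critic, image, pred_mask, true_mask):
    """L1 distance between critic features of the prediction-masked and
    ground-truth-masked images, averaged over all scales."""
    feats_pred = critic(image * pred_mask)
    feats_true = critic(image * true_mask)
    losses = [torch.mean(torch.abs(fp - ft)) for fp, ft in zip(feats_pred, feats_true)]
    return sum(losses) / len(losses)

# usage sketch: the segmenter minimises this loss while the critic maximises it
critic = TinyCritic()
image = torch.rand(2, 1, 64, 64)
true_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
pred_mask = torch.rand(2, 1, 64, 64)      # segmenter output probabilities
print(multiscale_l1(critic, image, pred_mask, true_mask).item())
```

In a full adversarial setup the segmenter and critic would be updated alternately, as described above.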
Where adversarial learning and the multiscale L1 loss function improve the segmentation performance, they also come with enhanced complexity, making the network require more memory and computation power. This hampers the scalability of the model and its practical applicability in real time. The authors evaluate and discuss SegAN performance in BRATS2013 and BRATS2015 and do not discuss its applicability to any other medical datasets, nor do they say if the proposed methodology is generalisable in different medical applications.\nThe three-stage FCN [63 ###reference_b63###], proposed by Yan et al., focusses on accurately segmenting retinal vessels in medical images. It employs a multistage architecture to progressively refine segmentation results, with the aim of improving accuracy and reducing false positives and false negatives. Like other deep learning-based techniques previously discussed, the three-stage FCN is computationally complex and costly. This model requires a large dataset for training, which is not available in the case of medical images.\nThe \"BTS-DSN\" model proposed by Guo et al. [51 ###reference_b51###] aims to perform retinal vessel segmentation using a deep-supervised neural network with short connections. The model employs a deeply supervised learning approach, which involves adding auxiliary supervision signals at intermediate layers of the network, which helps facilitate gradient flow during training and can lead to more stable convergence and improved segmentation performance. Furthermore, BTS-DSN uses short connections within the neural network architecture, which can help propagate information across different layers more effectively, aiding in feature extraction and segmentation accuracy. The authors use DRIVE, CHASEDB1 and STARE datasets to evaluate the proposed method and use data augmentation to enlarge the datasets. They have used traditional augmentation techniques, including rotation, flipping, and scaling, but do not mention the scaling size and reason. They train the network with a learning rate of that is rarely practised with a very minor learning rate decay. They do not mention why they chose these hyperparameters. Although the most commonly used learning rate that has resulted very well is 1. The authors also use ResNet-101 as the backbone, which causes the model to have a large capacity and to be computationally complex and costly.\nU-Net revolutionizes conventional CNN networks\u2019 application in medical image segmentation by adopting symmetrical structure skip connections and displaying state-of-the-art performance in image segmentation tasks. This strategic design overcomes specific challenges posed by medical images, including noise and unclear boundaries, while efficiently integrating low-level and high-level image features essential for precise segmentation in medical tasks. As a result, the U-Net stands out as the premier choice for medical image segmentation, catalyzing numerous breakthroughs in the field. Given the volumetric medical data like CT and MRI images that are in 3D format, researchers have ventured into extending U-Net\u2019s capabilities to 3D data. \u00c7i\u00e7ek et al. [7 ###reference_b7###] started with the 3D U-Net, specifically tailored for handling 3D medical data. However, the 3D U-Net\u2019s restricted depth, owing to computational limitations, compromises its capacity to capture intricate features, thus constraining segmentation accuracy. In response to this challenge, Milletari et al. 
[42 ###reference_b42###] introduced the V-Net, a variant architecture integrating residual connections for deeper network structures. This innovation not only addresses issues like the vanishing gradient but also facilitates deeper architectures, thereby enhancing feature representation and segmentation performance.\nAfter the transformer\u2019s enormous success on language models and its remarkable performance in vision applications, researchers were interested in merging the power of U-Net with transformer and many transformer-based U-Net models such as Trans-UNet [6 ###reference_b6###], Swin-UNet [5 ###reference_b5###] and UNet++ with Vision Transformer were proposed. Whereas standard U-Net fails to capture global features effectively, transformer-based U-Net models address this limitation by replacing the convolutional layers with transformer blocks in the standard U-Net encoder. This self-attention mechanism helps the model to capture long-range dependencies efficiently, leading to overall improved segmentation performance.\nProposed by Zhou et al. U-Net++ [64 ###reference_b64###] aims to address some limitations of the standard U-Net model in capturing multi-scale contextual information efficiently. U-Net++ presents notable strengths in image segmentation with its ability to enhance accuracy through nested skip connections, capturing multi-scale contextual information, and deep supervision mechanisms, which facilitate learning features at various abstraction levels. This hierarchical feature learning capability enables the model to effectively segment complex structures in images. However, these advantages come with limitations. The increased computational complexity of U-Net++, stemming from its deeper architecture and dense connectivity, can pose challenges during both training and inference, potentially demanding substantial computational resources. Additionally, training U-Net++ requires more time and careful optimization due to its complexity, and there is a heightened risk of overfitting, especially with limited training data. Interpretability may also be compromised by the dense connectivity, and the model may require more memory resources during deployment, which could be problematic in resource-constrained environments like edge devices or real-time applications.\nThe improved performance of the different variations of U-Net is undeniable, yet they come with the challenges of increased computational complexity, excessive memory requirements, and high chances of overfitting as compared to standard U-Net. Besides these challenges, transformer-based U-Net models require vigilant optimization and tuning of hyperparameters because of their hybrid nature and large parameter space." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Semi-supervised Deep Learning based Techniques", + "text": "In this area of research, the goal is to efficiently address the challenge of limited annotated data by using both labelled and unlabelled medical images for the training of segmentation models. This approach specifically suits medical images as there is always a shortage of annotated dataset that is large enough for the application. Semi-supervised segmentation is a common scenario in medical applications where a small portion of the training images are annotated, while we also have a large unannotated portion that can be used to improve both the accuracy and generalisation capability of the model. 
Several algorithms and models have been proposed in this area to reduce the cost of labor-intensive, pixel level annotations of large medical images datasets.\nOne of the common ways to deal with limited annotated dataset is data augmentation and the most used augmentation technique is the traditional parametric transformation of images such as translation, scaling, shifting, rotation, horizontal and vertical flips, etc. In addition to the traditional augmentation technique, researchers have also used conditional generative adversarial networks (cGANs) for the augmentation and synthesis of medical images. Several works, including [50 ###reference_b50###, 23 ###reference_b23###] have used these augmentation techniques to enlarge the dataset and improve the model performance. The authors in [50 ###reference_b50###], introduce a way to synthesise medical images using GANs that can help anonymise sensitive medical data. However, the quality of the synthesised images is questionable, since GANs can struggle to generate images with the level of detail and fidelity required for medical applications. The paper does not provide sufficient evaluation and validation of the method on clinical datasets, making it difficult to assess the performance of the proposed method in capturing accurate anatomy and pathology. Although [23 ###reference_b23###] produces synthetic data that closely resembles real-world CT scans, facilitating more realistic and clinically relevant evaluations of lung segmentation algorithms, they also fail to adequately address the realism and fidelity of synthetic nodules compared to real-world CT scans. Because cGANs generate images with blurred boundaries and low resolution, researchers have used CycleGAN to improve the quality of the synthesised images.\nAnother efficient way to deal with limited annotated data using semi-supervised learning is the transfer learning mechanism. In this setting, the trained and learnt weights of a pre-trained network are used to fine-tune a network on a new set of data with limited number of annotated and labelled samples. Researchers discovered that using pre-trained networks on natural images as an encoder for the U-Net like model and fine-tuning it on medical images improves the performance of the model for segmentation as well as classification tasks." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Lightweight Medical Image Segmentation Models", + "text": "Following the success of lightweight models like MobileNet [18 ###reference_b18###] in general object segmentation, there has been growing interest among researchers in designing efficient, lightweight networks for medical image segmentation. The main focus has been to minimize network size and capacity, reduce the computational burden, and lower memory requirements. Iqbal et al. [21 ###reference_b21###] introduced LDMRes-Net, a compact and efficient model built using dual multiscale residual blocks, which integrate a multiscale feature extraction mechanism. This allows the network to capture details at various granular levels, while also reducing the number of parameters and computational complexity compared to traditional deep learning models. The use of depth-wise separable convolutions further enhances the efficiency of LDMRes-Net, with residual connections ensuring that performance remains strong. Similarly, Khan et al. 
[30 ###reference_b30###] proposed a lightweight network tailored for medical image segmentation, focusing on the capture of high-frequency features crucial for such tasks. Their model incorporates expand-and-squeeze blocks, which increase computational efficiency and robustness, making it suitable for deployment on devices with limited processing power. Li et al. [36 ###reference_b36###] introduced a lightweight version of U-Net for lesion segmentation in ultrasound images. This model balances computational efficiency and accuracy, making it a strong choice for applications where resources are constrained. An additional example comes from Ma et al. [38 ###reference_b38###], who proposed ShuffleNet V2, a lightweight network known for its superior performance in mobile and embedded device scenarios. By employing a channel split operation, ShuffleNet V2 achieves an optimal balance between speed and accuracy, making it well-suited for tasks involving limited computational power.\nDespite these advances, there has been limited work on the development of lightweight models for medical image segmentation that works fine with general medical images. In this paper, we aim to address this gap by proposing a lightweight model for segmentation of medical images including retinal vessels, skin lesion and multi-organ nuclei, while maintaining state-of-the-art performance. This model will be optimized to work effectively on devices with limited computational resources, making it a valuable contribution to the field of medical image analysis.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "We introduce Mini-Net, which is designed as a lightweight encoder-decoder model specifically crafted for the segmentation of medical images. Central to its architecture is the integration of a dual multiresidual block (DMRes) and an Expand Squeeze block, inspired by recent advances in feature extraction and regularisation techniques [21 ###reference_b21###] and [30 ###reference_b30###]. Mini-Net aims to strike a balance between capturing high-level semantic features and preserving fine-grained details inherent in medical imaging data. This balance is crucial for accurate segmentation, particularly in tasks involving anatomical structures or pathological regions." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Mini-Net Architecture", + "text": "The architecture of Mini-Net is characterised by an encoder-decoder framework, with the DMRes block serving as its central component. Unlike traditional encoder-decoder models, Mini-Net places special emphasis on efficient feature extraction, achieved through the integration of DMRes blocks within the encoder pathway. These blocks facilitate multiscale feature extraction and refinement, enabling the model to capture both global context and local details present in the input images. This feature is particularly beneficial in medical imaging, where precise delineation of structures is paramount.\nFigure 1 ###reference_### shows the diagram of the Mini-Net model. The input of the model denoted , is represented as a three-dimensional tensor with dimensions , where represents the number of channels and and denote the height and width of the input image, respectively. 
The operation denotes a convolution operation with a kernel size of , and represents batch normalisation.\nThe initial feature map, denoted , is obtained by processing the input image through a convolution operation followed by batch normalisation, as expressed in Equation 1 ###reference_###:\nThe feature map, , is then fed as input to the first encoder block. Each encoder block has a DMRes block followed by a strided convolution operation. So, is fed into the DMRes block where multi-scale feature extraction and feature refinement are performed. The is the output of the DMRes block given in (Eq. 4 ###reference_###), where .\nand are the intermediate outputs of the addition layers of the dual multiscale residual block and are calculated as (Eqs. 3 ###reference_###-2 ###reference_###). We have used convolution operations with kernel sizes and to obtain features on multiple scales and then added residual connections to maintain high-frequency features. Now that we have feature maps, , achieved from the DMRes block, we feed it into the strided convolutional layer of the encoder block, , where is the kernel size, for downsampling of the feature maps as computed in (Eq.5 ###reference_###).\nHere , where we take as 8, is the output of the first encoder block which is fed as the input to the second encoder block where the same sequence of steps is followed as outlined in (Eq. 2 ###reference_###-4 ###reference_###). The output of the second DMRes block, , is further fed into the second decoder that generates . This value is then directed to the bottleneck block, which comprises a single DMRes block that yields the final output of the encoder blocks, . This output is now ready to be fed into the first decoder block. It is essential to note that in the bottleneck, we solely refine the feature maps while maintaining the same spatial dimensions as .\nOur decoder blocks mirror the architecture of the encoder blocks, initiating with deconvolution operations for up-sampling, succeeded by DMRes blocks. The initial decoder begins with a deconvolution layer, as delineated in (Eq. 6 ###reference_###). Subsequently, the output of the decoder blocks is calculated according to the formulations in (Eq.7 ###reference_###-9 ###reference_###), where denotes a deconvolution operation with a kernel size of .\nThe features , obtained from the first decoder block, are fed into the deconvolution layer of the second decoder which, in turn, is fed to the DMRes block of the second decoder. For this purpose, the equations (Eqs.6 ###reference_###-9 ###reference_###) are repeated, and we receive .\nNow we evaluate the output, as given in (Eq. 10 ###reference_###).\nThe feature map obtained, , is processed through the dice-pixel classification layer to obtain the final binary segmentation mask, as in (Eq. 11 ###reference_###).\nIn the dual multi-residual (DMRes) blocks, we use kernels of different sizes to simultaneously capture features at varying scales on every level. This approach ensures that each feature map generated by the encoder blocks represents multi-scale features, including both high and low-frequency components. As a result, the detailed feature maps contribute to more accurate delineation of various anatomical structures. Within the DMRes blocks, we incorporate expand and squeeze blocks to accelerate convolutional operations and minimize the overall number of computations. 
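The following sketch illustrates how a dual multiscale residual block with an expand-and-squeeze refinement, and the surrounding two-level encoder-decoder, could be organised. The kernel sizes (3×3 and 5×5), the channel progression from 8 to 32, the expand ratio and the additive skip connections are assumptions made for illustration; they are not guaranteed to reproduce the exact Mini-Net configuration or its reported parameter budget.

```python
import torch
import torch.nn as nn

class ExpandSqueeze(nn.Module):
    """1x1 expand -> depth-wise 3x3 -> 1x1 squeeze refinement (illustrative)."""
    def __init__(self, ch, ratio=2):
        super().__init__()
        mid = ch * ratio
        self.block = nn.Sequential(
            nn.Conv2d(ch, mid, 1, bias=False), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depth-wise
            nn.Conv2d(mid, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.block(x))

class DMResBlock(nn.Module):
    """Dual multiscale residual block (sketch): parallel 3x3 and 5x5 paths with
    residual additions, followed by an expand-squeeze refinement."""
    def __init__(self, ch):
        super().__init__()
        self.conv3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.conv5 = nn.Sequential(nn.Conv2d(ch, ch, 5, padding=2), nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.refine = ExpandSqueeze(ch)

    def forward(self, x):
        r1 = x + self.conv3(x)       # first multiscale residual
        r2 = r1 + self.conv5(r1)     # second multiscale residual
        return self.refine(r2)

class MiniNetSketch(nn.Module):
    """Two-level encoder/decoder around a DMRes bottleneck (illustrative)."""
    def __init__(self, in_ch=3, base=8, num_classes=1):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.BatchNorm2d(base))
        self.enc1, self.down1 = DMResBlock(base), nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.enc2, self.down2 = DMResBlock(base * 2), nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1)
        self.bottleneck = DMResBlock(base * 4)
        self.up1, self.dec1 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2), DMResBlock(base * 2)
        self.up2, self.dec2 = nn.ConvTranspose2d(base * 2, base, 2, stride=2), DMResBlock(base)
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        f0 = self.stem(x)
        e1 = self.enc1(f0)
        e2 = self.enc2(self.down1(e1))
        b = self.bottleneck(self.down2(e2))
        d1 = self.dec1(self.up1(b) + e2)   # additive skip connections
        d2 = self.dec2(self.up2(d1) + e1)
        return torch.sigmoid(self.head(d2))

model = MiniNetSketch()
print(sum(p.numel() for p in model.parameters()))  # small parameter budget by design
```

The additive skips keep the channel count, and hence the parameter count, low compared to concatenation-based skips, which is in line with the lightweight design goal.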
This integration significantly enhances the model\u2019s ability to capture features at multiple scales, enabling Mini-Net to focus on both high- and low-frequency features simultaneously. Additionally, the use of expand and squeeze blocks effectively reduces computational redundancy, making Mini-Net computationally efficient." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Loss Function", + "text": "We evaluated several popular loss functions that have shown promising performance in existing medical image segmentation solutions, such as the Dice coefficient loss given in Eq. 12 ###reference_###, the Jaccard coefficient loss given in Eq. 13 ###reference_###, the binary cross-entropy loss given in Eq. 14 ###reference_###, and different combinations of these losses with alpha weighting as given in Eq. 15 ###reference_###. In all these equations, the two arguments are the ground-truth mask and the model prediction.\nThe Dice coefficient loss evaluates the overlap between the ground truth and the predicted segments, particularly in image segmentation tasks. This loss function is favored for its effectiveness in addressing the pixel-wise class imbalance between foreground and background regions. The Dice loss can be computed as follows:\nThe Jaccard coefficient loss function, also known as the Intersection over Union (IoU) loss, has several strengths that make it a valuable choice for various machine learning tasks, particularly in image segmentation. Its strengths include robustness to class imbalance, sensitivity to object shape and boundary, and a direct interpretation as a measure of segmentation quality. The Jaccard coefficient loss can be calculated as:\nBinary cross-entropy is used to measure the difference between the ground truth and the predicted binary labels. We use it in combination with the Jaccard and Dice losses to make the model accountable for every mislabeled pixel in the segmentation map. Binary Cross-Entropy Loss:\nIn addition to using a combination of these popular loss functions, we use a dynamic weighting mechanism for the loss functions. A dynamically weighted loss function aims to enhance the learning process by adjusting the loss with a weight value that corresponds to the learning error of each data instance. The goal is to direct deep learning models to pay more attention to instances with larger errors, thereby improving overall performance. Alpha Weighted Loss:\nAfter an extensive set of experiments with different loss functions, we found that an alpha-weighted combination of the Dice coefficient loss, the Jaccard coefficient loss, and the binary cross-entropy loss gave the best segmentation results. This leads to our final loss function:" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments and Implementation Details", + "text": "We conducted a comprehensive evaluation of our model, assessing its performance against the state-of-the-art using diverse datasets. The experiments involved retinal vessel datasets, including DRIVE [53 ###reference_b53###], STARE [17 ###reference_b17###], and CHASEDB1 [13 ###reference_b13###], skin lesion datasets such as ISIC 2016 [16 ###reference_b16###] and ISIC 2018 [8 ###reference_b8###], and the MoNuSeg [33 ###reference_b33###] dataset. Table 1 ###reference_### provides specific details on these datasets, including the train and test splits. All experiments were executed on a GeForce RTX 3090 GPU.
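The combined objective of Eq. 16, used to train Mini-Net in the experiments below, can be sketched as follows: soft Dice and soft Jaccard terms plus binary cross-entropy, blended with a weighting factor alpha. The fixed alpha and the way the terms are grouped are illustrative assumptions; the dynamic alpha schedule is not reproduced here.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def soft_jaccard_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) - inter
    return 1.0 - ((inter + eps) / (union + eps)).mean()

def combined_loss(pred, target, alpha=0.5):
    """Alpha-weighted blend of BCE with Dice and Jaccard terms (illustrative weighting)."""
    bce = F.binary_cross_entropy(pred, target)
    dice = soft_dice_loss(pred, target)
    jaccard = soft_jaccard_loss(pred, target)
    return alpha * bce + (1.0 - alpha) * (dice + jaccard)

# usage: pred holds sigmoid probabilities, target is a binary mask
pred = torch.rand(2, 1, 128, 128)
target = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(combined_loss(pred, target).item())
```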
For consistency between datasets, we trained Mini-Net for 100 epochs, leveraging Adam optimiser, an alpha-weighted jaccard coefficient loss function combined binary cross entropy loss given in Eq. 16 ###reference_###, and an initial learning rate set at . The utilisation of the alpha-scheduler in conjunction with the objective function proved instrumental in expediting convergence to the minima, reducing unnecessary computations, and enhancing overall training effectiveness. To enhance the efficiency of the training, we employ an early stopping approach with a patience of 4. The choice of image size and batch size varied according to each dataset\u2019s specifications, ensuring compatibility with both the dataset requirements and GPU memory limitations.\nIn the context of medical image segmentation, the efficacy of lighter models with fewer parameters is evident, given the inherent limitation of available datasets in the medical imaging domain. The prevalence of limited datasets makes lighter models particularly advantageous, as larger capacity models are prone to overfitting. In our approach, we start with image processing with 8 channels, gradually progressing to a maximum of 32 channels. The architectural design of our model encompasses a total of 37,685 parameters, and 36,657 are trainable. This intentional restraint in the number of parameters is a strategic choice, aligning with the need for a balanced model capacity that avoids overfitting issues commonly associated with larger models.\nMethod\nPerformance (%)\n\n\nISIC 2018\n\nISIC 2016\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nU-Net [49 ###reference_b49###]\n80.09\n86.64\n92.52\n85.22\n92.09\n\n81.38\n88.24\n93.31\n87.28\n92.88\n\nUNet++ [64 ###reference_b64###]\n81.62\n87.32\n93.72\n88.70\n93.96\n\n82.81\n89.19\n93.88\n88.78\n93.52\n\nBCDU-Net [3 ###reference_b3###]\n81.10\n85.10\n93.70\n78.50\n98.20\n\n83.43\n80.95\n91.78\n78.11\n96.20\n\nCPFNet [12 ###reference_b12###]\n79.88\n87.69\n94.96\n89.53\n96.55\n\n83.81\n90.23\n95.09\n92.11\n95.91\n\nDAGAN [35 ###reference_b35###]\n81.13\n88.07\n93.24\n90.72\n95.88\n\n84.42\n90.85\n95.82\n92.28\n95.68\n\nFAT-Net [58 ###reference_b58###]\n82.02\n89.03\n95.78\n91.00\n96.99\n\n85.30\n91.59\n96.04\n92.59\n96.02\n\nAS-Net [19 ###reference_b19###]\n83.09\n89.55\n95.68\n93.06\n94.69\n\n-\n-\n-\n-\n-\n\nSLT-Net [11 ###reference_b11###]\n71.51\n82.85\n-\n78.85\n99.35\n\n-\n-\n-\n-\n-\n\nMs RED [9 ###reference_b9###]\n83.86\n90.33\n96.45\n91.10\n-\n\n87.03\n92.66\n96.42\n-\n-\n\nARU-GD [39 ###reference_b39###]\n84.55\n89.16\n94.23\n91.42\n96.81\n\n85.12\n90.83\n94.38\n89.86\n94.65\n\nSwin-Unet [5 ###reference_b5###]\n82.79\n88.98\n96.83\n90.10\n97.16\n\n87.60\n88.94\n96.00\n92.27\n95.79\n\nMini-Net\n89.82\n94.47\n96.89\n94.22\n97.78\n\n87.17\n92.45\n96.60\n92.51\n95.34" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Results and Discussion", + "text": "The exceptional performance of Mini-Net, despite its lightweight architecture, underscores its potential for broad applicability across different medical imaging modalities. 
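The comparisons that follow report sensitivity, specificity, accuracy and F1/Dice; the sketch below shows how these can be computed from the pixel-level confusion counts of a thresholded prediction (a threshold of 0.5 is assumed here for illustration).

```python
import torch

def segmentation_metrics(pred_prob, target, threshold=0.5, eps=1e-6):
    """Pixel-level Se, Sp, Acc and F1 for a binary segmentation mask."""
    pred = (pred_prob >= threshold).float()
    tp = (pred * target).sum()
    tn = ((1 - pred) * (1 - target)).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    se = tp / (tp + fn + eps)                   # sensitivity: recall on foreground pixels
    sp = tn / (tn + fp + eps)                   # specificity: recall on background pixels
    acc = (tp + tn) / (tp + tn + fp + fn + eps)
    f1 = (2 * tp) / (2 * tp + fp + fn + eps)    # equivalent to Dice for binary masks
    return {"Se": se.item(), "Sp": sp.item(), "Acc": acc.item(), "F1": f1.item()}

pred_prob = torch.rand(1, 1, 64, 64)
target = (torch.rand(1, 1, 64, 64) > 0.5).float()
print(segmentation_metrics(pred_prob, target))
```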
The performance metrics detailed in Tables 2 ###reference_###, 3 ###reference_###, 4 ###reference_###, and 5 ###reference_### consistently demonstrate Mini-Net\u2019s ability to achieve or exceed state-of-the-art results, reinforcing its robustness and efficiency.\nIn the context of the DRIVE dataset, as shown in Table 4 ###reference_###, Mini-Net not only achieved the highest sensitivity and score among lightweight models, but also maintained competitive accuracy, proving that it does not compromise performance despite its minimal parameter count. This balance between model size and performance is crucial in medical settings where computational resources are limited. It is worth mentioning that the specificity of a model reflects its capability to identify background pixels, while sensitivity reflects how well it identifies foreground pixels, which are the pixels we are actually interested in. Since there is a class imbalance in terms of pixel counts in medical images, such that the number of background pixels is much larger than the number of foreground pixels, it is very common for a model to show high specificity and low sensitivity. Hence, the majority of existing works report higher specificity and comparatively lower sensitivity. Nevertheless, Mini-Net displays a reasonable balance between the two metrics and is accurate in identifying the foreground pixels. This is because Mini-Net attends to high-frequency and low-frequency features equally, and the customised loss function drives the model to capture foreground pixels accurately and to learn edges and borders more effectively.\nFor the ISIC 2016 and 2018 datasets, Mini-Net\u2019s performance, as shown in Table 3 ###reference_###, was exemplary, particularly in handling high variability in image resolution and lesion appearance. This versatility is pivotal for models aimed at dermatological applications, where the morphology of the lesion can vary greatly, making consistent segmentation a challenging task. As on the other datasets, existing models show a similarly biased performance on the skin-lesion datasets. The class imbalance in the dataset clearly impacts model performance, but Mini-Net again shows consistent strength in identifying both the foreground and the background pixels efficiently.\nFurthermore, the superior results on the CHASEDB1 dataset, detailed in Table 5 ###reference_###, highlight Mini-Net\u2019s proficiency in segmenting fine details such as retinal vessels, which are critical for accurate diagnostic and treatment procedures in ophthalmology. The model\u2019s ability to finely delineate these tiny structures, often with better clarity than heavier models, could be particularly beneficial in enhancing the precision of retinal disease diagnoses.\nThese results collectively suggest that Mini-Net, with its innovative architecture, sets a new benchmark for lightweight models in medical image segmentation. Its impressive performance across diverse datasets indicates strong generalisability, making it a suitable choice for various real-time medical applications.\n###figure_2###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "We tried a variety of popular loss functions, such as the Jaccard loss, binary cross-entropy loss, Dice loss, combinations of these losses, and alpha-weighted versions of them.
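The alpha-weighted variants compared in this ablation follow the dynamic weighting idea outlined in Section 3.2. One possible per-pixel realisation of such error-driven weighting is sketched below; the exponent and normalisation are illustrative assumptions, not the exact schedule used for Mini-Net.

```python
import torch
import torch.nn.functional as F

def dynamically_weighted_bce(pred, target, gamma=1.0, eps=1e-6):
    """Per-pixel BCE re-weighted by the current prediction error, so that poorly
    predicted pixels contribute more to the objective (illustrative scheme)."""
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    error = torch.abs(pred - target).detach()   # current learning error, no gradient
    weight = (error + eps) ** gamma             # larger error -> larger weight
    return (weight * bce).sum() / (weight.sum() + eps)

pred = torch.rand(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(dynamically_weighted_bce(pred, target).item())
```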
As a result of extensive experiments and ablation study on loss functions, we chose the alpha-weighted sum of dice coefficient loss, binary cross-entropy and jaccard coefficient loss function. Table 6 ###reference_### shows the performance of our model on the ISIC-2018 dataset against different loss functions. We get the best results on the ISIC-2018 dataset with alpha-weighted sum of dice coefficient loss, binary cross entropy and Jaccard coefficient loss function which is given in Eq.16 ###reference_###. Whereas the alpha-weighted binary cross-entropy jaccard loss function performs well with skin lesion and MonuSeg datasets, we achieved better results on retinal vessel datasets with the alpha-weighted binary cross-entropy dice loss function. It is because jaccard coefficient is more robust on object shape and boundaries than the dice coefficient loss function while these both can well handle the class imbalance between foreground and background pixels in terms of pixel count. The alpha-weighted combination of the losses work well on skin lesion and retinal vessels datasets." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "This paper responds to the pressing need for machine learning models that can perform real-time segmentation of medical images. In addressing this need, we introduce Mini-Net, a model defined by its exceptionally lightweight framework, which is meticulously designed to support real-time segmentation tasks. Mini-Net stands out by achieving state-of-the-art results on a variety of medical image datasets, showcasing not only its effectiveness, but also its superior efficiency. With its compact design, which consists of only 37,800 parameters, Mini-Net works effectively on devices with limited memory and processing power, making it ideal for real-time medical applications.\nThe development of Mini-Net represents a significant advancement in the field of medical imaging, offering a solution that balances efficiency with performance. This balance is crucial for the deployment of advanced technologies in real-time settings, especially in environments where computational resources are scarce. Our comprehensive experiments across multiple datasets further highlight the model\u2019s robust generalizability, confirming its capability to handle diverse medical imaging tasks effectively. This demonstrates Mini-Net\u2019s potential as a transformative tool in medical diagnostics, contributing significantly to the evolution of healthcare technologies." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Datasets used in the study.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Application | Dataset | Image Resolution | Total | Training/Test Split
Retinal Vessels | DRIVE [53] | 584 × 565 | 40 | Train: 20, Test: 20
Retinal Vessels | CHASEDB1 [13] | 999 × 960 | 28 | Train: 20, Test: 8
Skin Lesions | ISIC 2016 [16] | 679 × 453 – 6,748 × 4,499 | 1,279 | Train: 900, Test: 379
Skin Lesions | ISIC 2018 [8] | 679 × 453 – 6,748 × 4,499 | 2,750 | Train: 2,000, Test: 600
Cell Nuclei | MoNuSeg [33] | 1,000 × 1,000 pixels | 44 | Train: 30, Test: 14
\n
", + "capture": "Table 1: Datasets used in the study." + }, + "2": { + "table_html": "
\n
Table 2: Comparison with state-of-the-art results on the MoNuSeg [33] dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | J | F1 | Params (M)
U-Net [49] | 0.6840 | 0.8190 | 15.56
UNet++ [64] | 0.6830 | 0.8110 | 18.27
BiO-Net [61] | 0.7040 | 0.8240 | 15
Swin-Unet [5] | 0.6377 | 0.7769 | 82.3
UCTransNet [56] | 0.6668 | 0.7987 | 65.6
Proposed Mini-Net (lightweight) | 0.7056 | 0.8269 | 0.04
\n
", + "capture": "Table 2: Comparison with state of the art results on the MoNuSeg [33] dataset." + }, + "3": { + "table_html": "
\n
Table 3: Performance comparison of Mini-Net with various SOTA methods on the skin lesion segmentation datasets ISIC 2018 [8], and ISIC 2016 [16].
\n
\n

Performance (%): J / Dice / Acc / Se / Sp
Method | ISIC 2018 | ISIC 2016
U-Net [49] | 80.09 / 86.64 / 92.52 / 85.22 / 92.09 | 81.38 / 88.24 / 93.31 / 87.28 / 92.88
UNet++ [64] | 81.62 / 87.32 / 93.72 / 88.70 / 93.96 | 82.81 / 89.19 / 93.88 / 88.78 / 93.52
BCDU-Net [3] | 81.10 / 85.10 / 93.70 / 78.50 / 98.20 | 83.43 / 80.95 / 91.78 / 78.11 / 96.20
CPFNet [12] | 79.88 / 87.69 / 94.96 / 89.53 / 96.55 | 83.81 / 90.23 / 95.09 / 92.11 / 95.91
DAGAN [35] | 81.13 / 88.07 / 93.24 / 90.72 / 95.88 | 84.42 / 90.85 / 95.82 / 92.28 / 95.68
FAT-Net [58] | 82.02 / 89.03 / 95.78 / 91.00 / 96.99 | 85.30 / 91.59 / 96.04 / 92.59 / 96.02
AS-Net [19] | 83.09 / 89.55 / 95.68 / 93.06 / 94.69 | - / - / - / - / -
SLT-Net [11] | 71.51 / 82.85 / - / 78.85 / 99.35 | - / - / - / - / -
Ms RED [9] | 83.86 / 90.33 / 96.45 / 91.10 / - | 87.03 / 92.66 / 96.42 / - / -
ARU-GD [39] | 84.55 / 89.16 / 94.23 / 91.42 / 96.81 | 85.12 / 90.83 / 94.38 / 89.86 / 94.65
Swin-Unet [5] | 82.79 / 88.98 / 96.83 / 90.10 / 97.16 | 87.60 / 88.94 / 96.00 / 92.27 / 95.79
Mini-Net | 89.82 / 94.47 / 96.89 / 94.22 / 97.78 | 87.17 / 92.45 / 96.60 / 92.51 / 95.34

\n
\n
", + "capture": "Table 3: Performance comparison of Mini-Net with various SOTA methods on the skin lesion segmentation datasets ISIC 2018 [8], and ISIC 2016 [16]." + }, + "4": { + "table_html": "
\n
Table 4: Comparison of Mini-Net and other existing works on the DRIVE dataset [53]. Best results are in bold, and dashes indicate unknown results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Se | Sp | Acc | F1 | Params (M)
SegNet [4] | 0.7949 | 0.9738 | 0.9579 | 0.8182 | 28.40
Three-Stage FCN [63] | 0.7631 | 0.9820 | 0.9538 | - | 20.40
Image BTS-DSN [51] | 0.7800 | 0.9806 | 0.9551 | 0.8208 | 7.80
VessNet [2] | 0.8022 | 0.9810 | 0.9655 | - | 9
DRIU [40] | 0.7855 | 0.9799 | 0.9552 | 0.8220 | 7.80
Patch BTS-DSN [51] | 0.7891 | 0.9804 | 0.9561 | 0.8249 | 7.8
DPN [14] | 0.7934 | 0.9810 | 0.9571 | 0.818 | 3.40
MobileNet-V3 [18] (Lightweight) | 0.8250 | 0.9771 | 0.9371 | 0.6575 | 2.50
ERFNet [48] (Lightweight) | - | - | 0.9598 | 0.7652 | 2.06
M2U-Net [34] (Lightweight) | - | - | 0.9630 | 0.8091 | 0.55
Vessel-Net [60] (Lightweight) | 0.8038 | 0.9802 | 0.9578 | - | 1.70
MS-NFN [59] (Lightweight) | 0.7844 | 0.9819 | 0.9567 | - | 0.40
FCN [1] (Lightweight) | 0.8039 | 0.9804 | 0.9576 | - | 0.20
T-Net [31] (Lightweight) | 0.8262 | 0.9862 | 0.9697 | 0.8269 | 0.03
ESDMR-Net [30] (Lightweight) | 0.8320 | 0.9832 | 0.9699 | 0.8287 | 0.70
Proposed Mini-Net (Lightweight) | 0.8370 | 0.9778 | 0.9598 | 0.8412 | 0.04
\n
", + "capture": "Table 4: Comparison of Mini-Net and other existing works on the DRIVE dataset [53]. Best results are in bold, and dashes indicate unknown results." + }, + "5": { + "table_html": "
\n
Table 5: Performance comparison between Mini-Net and several alternative methods on the CHASEDB1 dataset [13].
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Performance Measures in (%)
 | Se | Sp | Acc | AUC | F1
SegNet [4] | 78.93 | 97.92 | 96.11 | 98.35 | 79.01
UNet++ [64] | 81.33 | 98.09 | 96.10 | 97.81 | 82.03
Att UNet [44] | 80.10 | 98.04 | 96.42 | 98.40 | 80.12
BCD-Unet [3] | 79.41 | 98.06 | 96.07 | 97.76 | 80.22
BTS-DSN [15] | 78.88 | 98.01 | 96.27 | 98.40 | 79.83
DUNet [24] | 77.35 | 98.01 | 96.18 | 98.39 | 79.32
OCE-Net [57] | 81.38 | 98.24 | 96.78 | 98.72 | 81.96
Wave-Net [37] | 82.83 | 98.21 | 96.64 | - | 83.49
MultiResNet [39] | 83.22 | 98.48 | 97.06 | 98.22 | 83.08
G-Net Light [22] | 82.10 | 98.38 | 97.26 | 98.22 | 80.48
Proposed Mini-Net | 83.28 | 98.43 | 97.38 | 98.78 | 81.94
\n
", + "capture": "Table 5: Performance comparison between Mini-Net and several alternative methods on CHASEDB1 dataset [13]." + }, + "6": { + "table_html": "
\n
Table 6: Performance of the model with different loss functions on the ISIC-2018 dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Loss Function | J | Dice | Acc | Se | Sp
Dice Loss | 0.8787 | 0.9307 | 0.9623 | 0.9336 | 0.9608
Jacc. Loss | 0.8671 | 0.9254 | 0.9582 | 0.9183 | 0.9634
BCE + Dice | 0.8776 | 0.9294 | 0.9622 | 0.9302 | 0.9611
Alpha(BCE+Dice) | 0.8724 | 0.9266 | 0.9608 | 0.9287 | 0.9602
Alpha(Jacc.) | 0.8631 | 0.9223 | 0.9565 | 0.9218 | 0.9583
Alpha(BCE+Jacc.) | 0.8814 | 0.9340 | 0.9633 | 0.9326 | 0.9631
Alpha(Dice+BCE+Jacc.) | 0.8982 | 0.9447 | 0.9689 | 0.9422 | 0.9778
\n
", + "capture": "Table 6: Performance of model with different loss functions on ISIC-2018 dataset." + } + }, + "image_paths": { + "1": { + "figure_path": "2405.17520v4_figure_1.png", + "caption": "Figure 1: Mini-Net Model Diagram", + "url": "http://arxiv.org/html/2405.17520v4/x1.png" + }, + "2": { + "figure_path": "2405.17520v4_figure_2.png", + "caption": "Figure 2: Qualitative results of Mini-Net on sample images from (a) MonuSeg, (b) CHASE, and (c) ISIC-2018 datasets. The columns from left to right in each block represent query image, ground truth mask, and the predicted mask by Mini-Net respectively. The green and black pixels are the correctly segmented foreground and background respectively while blue pixels are the false positives and the red ones are the false negative pixels.", + "url": "http://arxiv.org/html/2405.17520v4/x2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Retinal vessel segmentation based on fully convolutional neural networks.", + "author": "O. Am\u00e9rico, P. S\u00e9rgio, and A. S. Carlos.", + "venue": "Expert Systems with Applications, 112:229 \u2013 242, 2018.", + "url": null + } + }, + { + "2": { + "title": "Aiding the diagnosis of diabetic and hypertensive retinopathy using artificial intelligence-based semantic segmentation.", + "author": "Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho, and Kang Ryoung Park.", + "venue": "Journal of Clinical Medicine, 8(9):1\u201328, 2019.", + "url": null + } + }, + { + "3": { + "title": "Bi-directional convlstm u-net with densley connected convolutions.", + "author": "Reza Azad, Maryam Asadi-Aghbolaghi, Mahmood Fathy, and Sergio Escalera.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision workshops, pages 0\u20130, 2019.", + "url": null + } + }, + { + "4": { + "title": "Segnet: A deep convolutional encoder-decoder architecture for image segmentation.", + "author": "V. Badrinarayanan, A. Kendall, and R. Cipolla.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481\u20132495, 2017.", + "url": null + } + }, + { + "5": { + "title": "Swin-Unet: Unet-like pure transformer for medical image segmentation.", + "author": "Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang.", + "venue": "In European Conference on Computer Vision (ECCV) Workshops, pages 205\u2013218, 2023.", + "url": null + } + }, + { + "6": { + "title": "TransUNet: Transformers make strong encoders for medical image segmentation.", + "author": "Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou.", + "venue": "arXiv:2102.04306, 2021.", + "url": null + } + }, + { + "7": { + "title": "3d u-net: learning dense volumetric segmentation from sparse annotation.", + "author": "\u00d6zg\u00fcn \u00c7i\u00e7ek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger.", + "venue": "In Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19, pages 424\u2013432. Springer, 2016.", + "url": null + } + }, + { + "8": { + "title": "Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC).", + "author": "Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. 
Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, and Allan Halpern.", + "venue": "arXiv:1902.03368, 2019.", + "url": null + } + }, + { + "9": { + "title": "Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation.", + "author": "Duwei Dai, Caixia Dong, Songhua Xu, Qingsen Yan, Zongfang Li, Chunyan Zhang, and Nana Luo.", + "venue": "Medical Image Analysis, 75:102293, 2022.", + "url": null + } + }, + { + "10": { + "title": "Airogs: artificial intelligence for robust glaucoma screening challenge.", + "author": "Coen De Vente, Koenraad A Vermeer, Nicolas Jaccard, He Wang, Hongyi Sun, Firas Khader, Daniel Truhn, Temirgali Aimyshev, Yerkebulan Zhanibekuly, Tien-Dung Le, et al.", + "venue": "IEEE transactions on medical imaging, 43(1):542\u2013557, 2023.", + "url": null + } + }, + { + "11": { + "title": "SLT-Net: A codec network for skin lesion segmentation.", + "author": "Kaili Feng, Lili Ren, Guanglei Wang, Hongrui Wang, and Yan Li.", + "venue": "Computers in Biology and Medicine, 148:105942, 2022.", + "url": null + } + }, + { + "12": { + "title": "CPFNet: Context pyramid fusion network for medical image segmentation.", + "author": "Shuanglang Feng, Heming Zhao, Fei Shi, Xuena Cheng, Meng Wang, Yuhui Ma, Dehui Xiang, Weifang Zhu, and Xinjian Chen.", + "venue": "IEEE Transactions on Medical Imaging, 39(10):3008\u20133018, 2020.", + "url": null + } + }, + { + "13": { + "title": "An ensemble classification-based approach applied to retinal blood vessel segmentation.", + "author": "Muhammad Moazam Fraz, Paolo Remagnino, Andreas Hoppe, Bunyarit Uyyanonvara, Alicja R. Rudnicka, Christopher G. Owen, and Sarah A. Barman.", + "venue": "IEEE Transactions on Biomedical Engineering, 59(9):2538\u20132548, 2012.", + "url": null + } + }, + { + "14": { + "title": "DPN: Detail-preserving network with high resolution representation for efficient segmentation of retinal vessels.", + "author": "Song Guo.", + "venue": "Journal of Ambient Intelligence and Humanized Computing (2021), 2021.", + "url": null + } + }, + { + "15": { + "title": "Bts-dsn: Deeply supervised neural network with short connections for retinal vessel segmentation.", + "author": "Song Guo, Kai Wang, Hong Kang, Yujun Zhang, Yingqi Gao, and Tao Li.", + "venue": "International Journal of Medical Informatics, 126:105 \u2013 113, 2019.", + "url": null + } + }, + { + "16": { + "title": "Skin lesion analysis toward melanoma detection: A challenge at the International Symposium on Biomedical Imaging (ISBI) 2016 hosted by the International Skin Imaging Collaboration (ISIC).", + "author": "David Gutman, Noel CF Codella, Emre Celebi, Brian Helba, Michael Marchetti, Nabin Mishra, and Allan Halpern.", + "venue": "arXiv:1605.01397, 2016.", + "url": null + } + }, + { + "17": { + "title": "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response.", + "author": "AD Hoover, Valentina Kouznetsova, and Michael Goldbaum.", + "venue": "IEEE Transactions Medical Imaging, 19(3):203\u2013210, 2000.", + "url": null + } + }, + { + "18": { + "title": "Searching for mobilenetv3.", + "author": "Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 1314\u20131324, 2019.", + "url": null + } + }, + { + "19": { + "title": "AS-Net: Attention 
Synergy Network for skin lesion segmentation.", + "author": "Kai Hu, Jing Lu, Dongjin Lee, Dapeng Xiong, and Zhineng Chen.", + "venue": "Expert Systems with Applications, 201:117112, 2022.", + "url": null + } + }, + { + "20": { + "title": "Screening of glaucoma disease from retinal vessel images using semantic segmentation.", + "author": "Rakhshanda Imtiaz, Tariq M Khan, Syed Saud Naqvi, Muhammad Arsalan, and Syed Junaid Nawaz.", + "venue": "Computers & Electrical Engineering, 91:107036, 2021.", + "url": null + } + }, + { + "21": { + "title": "Ldmres-net: A lightweight neural network for efficient medical image segmentation on iot and edge devices.", + "author": "Shahzaib Iqbal, Tariq M Khan, Syed S Naqvi, Asim Naveed, Muhammad Usman, Haroon Ahmed Khan, and Imran Razzak.", + "venue": "IEEE Journal of Biomedical and Health Informatics, 2023.", + "url": null + } + }, + { + "22": { + "title": "G-net light: A lightweight modified google net for retinal vessel segmentation.", + "author": "Shahzaib Iqbal, Saud Naqvi, Haroon Ahmed, Ahsan Saadat, and Tariq M Khan.", + "venue": "In Photonics, volume 9, pages 923\u2013936. MDPI, 2022.", + "url": null + } + }, + { + "23": { + "title": "Ct-realistic lung nodule simulation from 3d conditional generative adversarial networks for robust lung segmentation.", + "author": "Dakai Jin, Ziyue Xu, Youbao Tang, Adam P Harrison, and Daniel J Mollura.", + "venue": "In Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II 11, pages 732\u2013740. Springer, 2018.", + "url": null + } + }, + { + "24": { + "title": "Dunet: A deformable network for retinal vessel segmentation.", + "author": "Qiangguo Jin, Zhaopeng Meng, Tuan D. Pham, Qi Chen, Leyi Wei, and Ran Su.", + "venue": "Knowledge-Based Systems, 178:149 \u2013 162, 2019.", + "url": null + } + }, + { + "25": { + "title": "Feature enhancer segmentation network (fes-net) for vessel segmentation.", + "author": "Tariq M Khan, Muhammad Arsalan, Shahzaib Iqbal, Imran Razzak, and Erik Meijering.", + "venue": "arXiv preprint arXiv:2309.03535, 2023.", + "url": null + } + }, + { + "26": { + "title": "Simple and robust depth-wise cascaded network for polyp segmentation.", + "author": "Tariq M Khan, Muhammad Arsalan, Imran Razzak, and Erik Meijering.", + "venue": "Engineering Applications of Artificial Intelligence, 121:106023, 2023.", + "url": null + } + }, + { + "27": { + "title": "Mkis-net: a light-weight multi-kernel network for medical image segmentation.", + "author": "Tariq M Khan, Muhammad Arsalan, Antonio Robles-Kelly, and Erik Meijering.", + "venue": "In International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1\u20138. 
10.1109/DICTA56598.2022.10034573, 2022.", + "url": null + } + }, + { + "28": { + "title": "Width-wise vessel bifurcation for improved retinal vessel segmentation.", + "author": "Tariq M Khan, Mohammad AU Khan, Naveed Ur Rehman, Khuram Naveed, Imran Uddin Afridi, Syed Saud Naqvi, and Imran Raazak.", + "venue": "Biomedical Signal Processing and Control, 71:103169, 2022.", + "url": null + } + }, + { + "29": { + "title": "Leveraging image complexity in macro-level neural network design for medical image segmentation.", + "author": "Tariq M Khan, Syed S Naqvi, and Erik Meijering.", + "venue": "Scientific Reports, 12(1):22286, 2022.", + "url": null + } + }, + { + "30": { + "title": "Esdmr-net: A lightweight network with expand-squeeze and dual multiscale residual connections for medical image segmentation.", + "author": "Tariq M Khan, Syed S Naqvi, and Erik Meijering.", + "venue": "arXiv preprint arXiv:2312.10585, 2023.", + "url": null + } + }, + { + "31": { + "title": "T-net: A resource-constrained tiny convolutional neural network for medical image segmentation.", + "author": "Tariq M Khan, Antonio Robles-Kelly, and Syed S Naqvi.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 644\u2013653, 2022.", + "url": null + } + }, + { + "32": { + "title": "Residual multiscale full convolutional network (RM-FCN) for high resolution semantic segmentation of retinal vasculature.", + "author": "Tariq M Khan, Antonio Robles-Kelly, Syed S Naqvi, and Arsalan Muhammad.", + "venue": "In Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshops, page 324\u2013333, 2021.", + "url": null + } + }, + { + "33": { + "title": "A multi-organ nucleus segmentation challenge.", + "author": "Neeraj Kumar, Ruchika Verma, Deepak Anand, Yanning Zhou, Omer Fahri Onder, Efstratios Tsougenis, Hao Chen, Pheng-Ann Heng, Jiahui Li, Zhiqiang Hu, Yunzhi Wang, Navid Alemi Koohbanani, Mostafa Jahanifar, Neda Zamani Tajeddin, Ali Gooya, Nasir Rajpoot, Xuhua Ren, Sihang Zhou, Qian Wang, Dinggang Shen, Cheng-Kun Yang, Chi-Hung Weng, Wei-Hsiang Yu, Chao-Yuan Yeh, Shuang Yang, Shuoyu Xu, Pak Hei Yeung, Peng Sun, Amirreza Mahbod, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Orjan Smedby, Chunliang Wang, Benjamin Chidester, That-Vinh Ton, Minh-Triet Tran, Jian Ma, Minh N. Do, Simon Graham, Quoc Dang Vu, Jin Tae Kwak, Akshaykumar Gunda, Raviteja Chunduri, Corey Hu, Xiaoyang Zhou, Dariush Lotfi, Reza Safdari, Antanas Kascenas, Alison O\u2019Neil, Dennis Eschweiler, Johannes Stegmaier, Yanping Cui, Baocai Yin, Kailin Chen, Xinmei Tian, Philipp Gruening, Erhardt Barth, Elad Arbel, Itay Remer, Amir Ben-Dor, Ekaterina Sirazitdinova, Matthias Kohl, Stefan Braunewell, Yuexiang Li, Xinpeng Xie, Linlin Shen, Jun Ma,\nKrishanu Das Baksi, Mohammad Azam Khan, Jaegul Choo, Adri\u00e1n Colomer, Valery Naranjo, Linmin Pei, Khan M. 
Iftekharuddin, Kaushiki Roy, Debotosh Bhattacharjee, Anibal Pedraza, Maria Gloria Bueno, Sabarinathan Devanathan, Saravanan Radhakrishnan, Praveen Koduganty, Zihan Wu, Guanyu Cai, Xiaojie Liu, Yuqin Wang, and Amit Sethi.", + "venue": "IEEE Transactions on Medical Imaging, 39(5):1380\u20131391, 2020.", + "url": null + } + }, + { + "34": { + "title": "M2U-Net: Effective and efficient retinal vessel segmentation for real-world applications.", + "author": "Tim Laibacher, Tillman Weyde, and Sepehr Jalali.", + "venue": "In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 115\u2013124, 2019.", + "url": null + } + }, + { + "35": { + "title": "Skin lesion segmentation via generative adversarial networks with dual discriminators.", + "author": "Baiying Lei, Zaimin Xia, Feng Jiang, Xudong Jiang, Zongyuan Ge, Yanwu Xu, Jing Qin, Siping Chen, Tianfu Wang, and Shuqiang Wang.", + "venue": "Medical Image Analysis, 64:101716, 2020.", + "url": null + } + }, + { + "36": { + "title": "Lightweight U-Net for lesion segmentation in ultrasound images.", + "author": "Yingping Li, Emilie Chouzenoux, Benoit Charmettant, Baya Benatsou, Jean-Philippe Lamarque, and Nathalie Lassau.", + "venue": "In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pages 611\u2013615, 2021.", + "url": null + } + }, + { + "37": { + "title": "Wave-Net: A lightweight deep network for retinal vessel segmentation from fundus images.", + "author": "Yanhong Liu, Ji Shen, Lei Yang, Hongnian Yu, and Guibin Bian.", + "venue": "Computers in Biology and Medicine, page 106341, 2022.", + "url": null + } + }, + { + "38": { + "title": "ShuffleNet V2: Practical guidelines for efficient CNN architecture design.", + "author": "Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 116\u2013131, 2018.", + "url": null + } + }, + { + "39": { + "title": "Attention Res-UNet with Guided Decoder for semantic segmentation of brain tumors.", + "author": "Dhiraj Maji, Prarthana Sigedar, and Munendra Singh.", + "venue": "Biomedical Signal Processing and Control, 71:103077, 2022.", + "url": null + } + }, + { + "40": { + "title": "Deep retinal image understanding.", + "author": "Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Pablo Arbel\u00e1ez, and Luc Van Gool.", + "venue": "In Medical Image Computing and Computer-Assisted Intervention, pages 140\u2013148, 2016.", + "url": null + } + }, + { + "41": { + "title": "Self-supervised spatial\u2013temporal transformer fusion based federated framework for 4D cardiovascular image segmentation.", + "author": "Moona Mazher, Imran Razzak, Abdul Qayyum, M Tanveer, Susann Beier, Tariq Khan, and Steven A Niederer.", + "venue": "Information Fusion, 106:102256, 2024.", + "url": null + } + }, + { + "42": { + "title": "V-Net: Fully convolutional neural networks for volumetric medical image segmentation.", + "author": "Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi.", + "venue": "In 2016 Fourth International Conference on 3D Vision (3DV), pages 565\u2013571.
IEEE, 2016.", + "url": null + } + }, + { + "43": { + "title": "Towards automated eye diagnosis: An improved retinal vessel segmentation framework using ensemble block matching 3D filter.", + "author": "Khuram Naveed, Faizan Abdullah, Hussain Ahmad Madni, Mohammad AU Khan, Tariq M Khan, and Syed Saud Naqvi.", + "venue": "Diagnostics, 11(1):114, 2021.", + "url": null + } + }, + { + "44": { + "title": "Attention U-Net: Learning where to look for the pancreas.", + "author": "Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al.", + "venue": "arXiv:1804.03999, 2018.", + "url": null + } + }, + { + "45": { + "title": "Semi-supervised 3D-InceptionNet for segmentation and survival prediction of head and neck primary cancers.", + "author": "Abdul Qayyum, Moona Mazher, Tariq Khan, and Imran Razzak.", + "venue": "Engineering Applications of Artificial Intelligence, 117:105590, 2023.", + "url": null + } + }, + { + "46": { + "title": "Two-stage self-supervised contrastive learning aided transformer for real-time medical image segmentation.", + "author": "Abdul Qayyum, Imran Razzak, Moona Mazher, Tariq Khan, Weiping Ding, and Steven Niederer.", + "venue": "IEEE Journal of Biomedical and Health Informatics, 2023.", + "url": null + } + }, + { + "47": { + "title": "Unsupervised unpaired multiple fusion adaptation aided with self-attention generative adversarial network for scar tissues segmentation framework.", + "author": "Abdul Qayyum, Imran Razzak, Moona Mazher, Xuequan Lu, and Steven A Niederer.", + "venue": "Information Fusion, 106:102226, 2024.", + "url": null + } + }, + { + "48": { + "title": "ERFNet: Efficient residual factorized ConvNet for real-time semantic segmentation.", + "author": "E. Romera, J. M. \u00c1lvarez, L. M. Bergasa, and R. Arroyo.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 19(1):263\u2013272, 2018.", + "url": null + } + }, + { + "49": { + "title": "U-Net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234\u2013241, 2015.", + "url": null + } + }, + { + "50": { + "title": "Medical image synthesis for data augmentation and anonymization using generative adversarial networks.", + "author": "Hoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew L Senjem, Jeffrey L Gunter, Katherine P Andriole, and Mark Michalski.", + "venue": "In Simulation and Synthesis in Medical Imaging: Third International Workshop, SASHIMI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings 3, pages 1\u201311. Springer, 2018.", + "url": null + } + }, + { + "51": { + "title": "BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation.", + "author": "G. Song, W. Kai, K. Hong, Z. Yujun, G. Yingqi, and L.
Tao.", + "venue": "International Journal of Medical Informatics, 126:105 \u2013 113, 2019.", + "url": null + } + }, + { + "52": { + "title": "Impact of ICA-based image enhancement technique on retinal blood vessels segmentation.", + "author": "Toufique Ahmed Soomro, Tariq Mahmood Khan, Mohammad AU Khan, Junbin Gao, Manoranjan Paul, and Lihong Zheng.", + "venue": "IEEE Access, 6:3524\u20133538, 2018.", + "url": null + } + }, + { + "53": { + "title": "Ridge-based vessel segmentation in color images of the retina.", + "author": "Joes Staal, Michael D Abr\u00e0moff, Meindert Niemeijer, Max A Viergever, and Bram Van Ginneken.", + "venue": "IEEE Transactions on Medical Imaging, 23(4):501\u2013509, 2004.", + "url": null + } + }, + { + "54": { + "title": "Discriminating retinal microvascular and neuronal differences related to migraines: Deep learning based cross-sectional study.", + "author": "Feilong Tang, Matt Trinh, Annita Duong, Angelica Ly, Fiona Stapleton, Zhe Chen, Zongyuan Ge, and Imran Razzak.", + "venue": "arXiv preprint arXiv:2408.07293, 2024.", + "url": null + } + }, + { + "55": { + "title": "Sight for sore heads\u2013using CNNs to diagnose migraines.", + "author": "Matt Trinh, Feilong Tang, Angelica Ly, Annita Duong, Fiona Stapleton, Zongyuan Ge, and Imran Razzak.", + "venue": "Investigative Ophthalmology & Visual Science, 65(9):PB0010\u2013PB0010, 2024.", + "url": null + } + }, + { + "56": { + "title": "UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer.", + "author": "Haonan Wang, Peng Cao, Jiaqi Wang, and Osmar R. Zaiane.", + "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 36(3):2441\u20132449, Jun. 2022.", + "url": null + } + }, + { + "57": { + "title": "Orientation and context entangled network for retinal vessel segmentation, 2022.", + "author": "Xinxu Wei, Kaifu Yang, Danilo Bzdok, and Yongjie Li.", + "venue": null, + "url": null + } + }, + { + "58": { + "title": "FAT-Net: Feature adaptive transformers for automated skin lesion segmentation.", + "author": "Huisi Wu, Shihuai Chen, Guilian Chen, Wei Wang, Baiying Lei, and Zhenkun Wen.", + "venue": "Medical Image Analysis, 76:102327, 2022.", + "url": null + } + }, + { + "59": { + "title": "Multiscale network followed network model for retinal vessel segmentation.", + "author": "Y. Wu, Y. Xia, Y. Song, Y. Zhang, and W. Cai.", + "venue": "In Medical Image Computing and Computer Assisted Intervention, pages 119\u2013126, 2018.", + "url": null + } + }, + { + "60": { + "title": "Vessel-Net: Retinal vessel segmentation under multi-path supervision.", + "author": "Yicheng Wu, Yong Xia, Yang Song, Donghao Zhang, Dongnan Liu, Chaoyi Zhang, and Weidong Cai.", + "venue": "In Medical Image Computing and Computer Assisted Intervention, pages 264\u2013272, 2019.", + "url": null + } + }, + { + "61": { + "title": "BiO-Net: Learning recurrent bi-directional connections for encoder-decoder architecture.", + "author": "Tiange Xiang, Chaoyi Zhang, Dongnan Liu, Yang Song, Heng Huang, and Weidong Cai.", + "venue": "In Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, and Leo Joskowicz, editors, Medical Image Computing and Computer Assisted Intervention \u2013 MICCAI 2020, pages 74\u201384, Cham, 2020.
Springer International Publishing.", + "url": null + } + }, + { + "62": { + "title": "SegAN: Adversarial network with multi-scale L1 loss for medical image segmentation.", + "author": "Yuan Xue, Tao Xu, Han Zhang, L Rodney Long, and Xiaolei Huang.", + "venue": "Neuroinformatics, 16:383\u2013392, 2018.", + "url": null + } + }, + { + "63": { + "title": "A three-stage deep learning model for accurate retinal vessel segmentation.", + "author": "Z. Yan, X. Yang, and K. Cheng.", + "venue": "IEEE Journal of Biomedical and Health Informatics, 23(4):1427\u20131436, 2019.", + "url": null + } + }, + { + "64": { + "title": "UNet++: A nested U-Net architecture for medical image segmentation.", + "author": "Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang.", + "venue": "In Deep Learning in Medical Image Analysis (DLMIA) & Multimodal Learning for Clinical Decision Support (ML-CDS) Held in Conjunction with MICCAI, pages 3\u201311, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2405.17520v4" +} \ No newline at end of file