{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "98d4f33d",
   "metadata": {},
   "source": [
    "# README\n",
    "\n",
    "This is the **M**edical **I**mage **E**nhancement **T**ool **B**ox (MIETB), a medical image processing program written in Python that can be used for a variety of medical image processing tasks.\n",
    "\n",
    "## Features\n",
    "\n",
    "The program provides the following functionality:\n",
    "\n",
    "- Image super-resolution reconstruction\n",
    "    - scale: the upscaling factor; factors of 2 and 4 are supported\n",
    "    - For custom algorithms at other, higher factors, contact Mr. Zhang on WeChat: OnekeyAI4U\n",
    "\n",
    "## Gathering the input files\n",
    "\n",
    "Two batch-processing modes are provided:\n",
    "1. Directory mode: process every .nii.gz file under a given directory. By default, all .nii.gz files found under the directory are used.\n",
    "2. File mode: the samples to process are listed in a text file, one per line.\n",
    "\n",
    "## Technical Documentation\n",
    "\n",
    "Medical imaging plays a crucial role in the diagnosis and treatment of various diseases. However, the spatial resolution of medical images is often limited by factors such as hardware constraints, acquisition time, and radiation exposure. This limitation can make accurate diagnosis and treatment planning difficult, so techniques that improve the spatial resolution of medical images are needed. In recent years, deep learning-based super-resolution reconstruction techniques have shown promising results on medical images. Here we describe a 3D super-resolution reconstruction technique for medical images that uses a generative adversarial network (GAN) as its basic architecture.\n",
    "\n",
    "Super-resolution reconstruction is a technique that aims to improve the spatial resolution of an image beyond the physical limitations of the imaging system. In medical imaging, super-resolution reconstruction can be used to improve the quality of images obtained from various modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound. Super-resolution reconstruction techniques can be broadly classified into two categories: interpolation-based and learning-based. Interpolation-based techniques use mathematical models to estimate the high-resolution image from the low-resolution image. However, these techniques often result in blurry images with limited improvement in spatial resolution. Learning-based techniques, on the other hand, use deep learning models to learn the mapping between low-resolution and high-resolution images. These techniques have shown promising results in improving the spatial resolution of medical images.\n",
    "\n",
    "The 3D super-resolution reconstruction technique provided in our method utilizes a GAN as its basic architecture. GANs are a type of deep learning model that consists of two networks: a generator network and a discriminator network. The generator network generates high-resolution images from low-resolution images, while the discriminator network distinguishes between real and generated images. The two networks are trained in an adversarial manner, where the generator network tries to generate images that can fool the discriminator network, and the discriminator network tries to distinguish between real and generated images. This adversarial training process helps the generator network to learn the mapping between low-resolution and high-resolution images.\n",
    "\n",
    "The dataset used to train the 3D super-resolution reconstruction technique consists of millions of medical images. The images are preprocessed to remove noise and artifacts and to normalize the intensity values. The images are then divided into low-resolution and high-resolution pairs, where the low-resolution images are obtained by downsampling the high-resolution images. The pairs are used to train the GAN model.\n",
    "\n",
    "The loss function used in the GAN model consists of three components: gradient loss, L1 loss, and perceptual loss. The gradient loss encourages the generated images to have similar gradient values as the high-resolution images. The L1 loss measures the pixel-wise difference between the generated and high-resolution images. The perceptual loss measures the difference between the feature representations of the generated and high-resolution images obtained from a pre-trained deep learning model. The combination of these loss functions helps to ensure that the generated images are visually similar to the high-resolution images.\n",
    "\n",
    "The 3D super-resolution reconstruction technique we provide has shown promising results in improving the spatial resolution of medical images. For example, it can increase the spatial resolution by a factor of 4 while maintaining the original image size: a voxel of 1×1×1 mm can be refined to 1×1×0.25 mm, a 4× increase along one axis. The technique has been evaluated on various medical imaging modalities such as CT, MRI, and ultrasound, and has shown significant improvement in image quality and spatial resolution. It has also been compared with other state-of-the-art super-resolution reconstruction techniques and has shown superior performance."
   ]
  },
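  {
   "cell_type": "markdown",
   "id": "a1f4c9e2",
   "metadata": {},
   "source": [
    "The three-component loss described above can be sketched in plain Python on small 1-D signals. This is an illustrative sketch only: the actual MIETB loss weights are not public and the pre-trained feature extractor is not specified, so `feature` below (a 2-tap moving average) and the unit weights are assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7e2d5a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch of the three GAN loss components on 1-D signals.\n",
    "# NOTE: 'feature' is a stand-in for a pre-trained network; weights are assumed.\n",
    "\n",
    "def l1_loss(pred, target):\n",
    "    # Mean absolute pixel-wise difference\n",
    "    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)\n",
    "\n",
    "def gradient(x):\n",
    "    # Finite-difference gradient of a 1-D signal\n",
    "    return [b - a for a, b in zip(x, x[1:])]\n",
    "\n",
    "def gradient_loss(pred, target):\n",
    "    # Encourages the generated image to match the HR image's gradients\n",
    "    return l1_loss(gradient(pred), gradient(target))\n",
    "\n",
    "def feature(x):\n",
    "    # Stand-in for a pre-trained model's feature map (assumption)\n",
    "    return [(a + b) / 2 for a, b in zip(x, x[1:])]\n",
    "\n",
    "def perceptual_loss(pred, target):\n",
    "    # L1 distance in the (stand-in) feature space\n",
    "    return l1_loss(feature(pred), feature(target))\n",
    "\n",
    "def total_loss(pred, target, w_grad=1.0, w_l1=1.0, w_perc=1.0):\n",
    "    # Weighted sum of the three components described in the text\n",
    "    return (w_grad * gradient_loss(pred, target)\n",
    "            + w_l1 * l1_loss(pred, target)\n",
    "            + w_perc * perceptual_loss(pred, target))\n",
    "\n",
    "hr = [0.0, 1.0, 2.0, 3.0]\n",
    "sr = [0.0, 1.1, 1.9, 3.0]\n",
    "print(total_loss(sr, hr))"
   ]
  },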
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "88f3b9b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "## Get the video tutorial\n",
    "from onekey_algo.custom.Manager import onekey_show\n",
    "onekey_show('OnekeyComp-超分重建')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df6f2a55",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'\n",
    "from onekey_algo import OnekeyDS as okds\n",
    "from onekey_algo import get_param_in_cwd\n",
    "\n",
    "# Directory mode: recursively collect all .nii.gz files under mydir\n",
    "scale = get_param_in_cwd('scale', 4)\n",
    "mydir = get_param_in_cwd('rad_dir', os.path.join(okds.ct, 'images'))\n",
    "samples = []\n",
    "for r, ds, fs in os.walk(mydir):\n",
    "    samples.extend([os.path.join(r, p) for p in fs if p.endswith('.nii.gz')])\n",
    "\n",
    "# File mode: read the sample paths from a text file, one per line\n",
    "# test_file = ''\n",
    "# with open(test_file) as f:\n",
    "#     samples = [l.strip() for l in f.readlines()]\n",
    "\n",
    "# Custom mode: list the sample paths directly\n",
    "# samples = ['path2nii.gz']\n",
    "samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "577fbf57",
   "metadata": {},
   "outputs": [],
   "source": [
    "from onekey_algo.mietb.super_resolution.eval_super_res_reconstruction import init as init_super\n",
    "from onekey_algo.mietb.super_resolution.eval_super_res_reconstruction import inference as inference_super\n",
    "\n",
    "save_dir = get_param_in_cwd('save_dir', None)\n",
    "print(save_dir)\n",
    "model, device = init_super(scale)\n",
    "inference_super(samples, model, device, scale, save_dir=save_dir)"
   ]
  },
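  {
   "cell_type": "markdown",
   "id": "c4d8f1a3",
   "metadata": {},
   "source": [
    "As described in the technical documentation, training pairs are built by downsampling the high-resolution images. Below is a minimal sketch of that pairing step on 1-D signals, assuming simple average-pooling as the downsampling operator; the operator actually used for training is not specified, so this is an illustration only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e9a3b6c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: build (LR, HR) training pairs by downsampling.\n",
    "# NOTE: average-pooling is an assumed downsampling operator.\n",
    "\n",
    "def downsample(hr, scale):\n",
    "    # Average-pool a 1-D signal by the given scale factor\n",
    "    assert len(hr) % scale == 0\n",
    "    return [sum(hr[i:i + scale]) / scale for i in range(0, len(hr), scale)]\n",
    "\n",
    "def make_pairs(hr_images, scale):\n",
    "    # Pair each high-resolution image with its downsampled version\n",
    "    return [(downsample(hr, scale), hr) for hr in hr_images]\n",
    "\n",
    "hr_images = [[0.0, 2.0, 4.0, 6.0], [1.0, 1.0, 3.0, 3.0]]\n",
    "print(make_pairs(hr_images, scale=2))"
   ]
  },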
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2157e73d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
