{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "87dac6cf",
   "metadata": {},
   "source": [
    "# Comp1 - Conventional Radiomics\n",
    "\n",
    "This pipeline targets conventional radiomics modeling and characterization. A typical use case is studying how the rad_score contributes to the final clinical diagnosis.\n",
    "\n",
    "The data generally take the following form (the exact file and folder names may differ):\n",
    "1. An `images` folder holding all CT, MRI, etc. volumes of the study subjects.\n",
    "2. A `masks` folder holding the manually delineated ROI regions, matched one-to-one with the files in `images`.\n",
    "3. A `label.txt` file with each patient's label, e.g. tumor benignity/malignancy or 5-year survival status.\n",
    "\n",
    "## Onekey steps\n",
    "\n",
    "1. Data validation: check that the data format is correct.\n",
    "2. Radiomics feature extraction: if step 1 passes, extract features from the corresponding data.\n",
    "3. Read the label information.\n",
    "4. Join features with labels to form the dataset.\n",
    "5. Inspect summary statistics and check the data for outliers.\n",
    "6. Normalization: transform the data to follow N(0, 1).\n",
    "7. Filter features by correlation coefficient, e.g. Spearman or Pearson.\n",
    "8. Build the training and test sets. A random split is used here; for proper multi-center validation, build the two datasets according to your own scenario.\n",
    "9. Select features with Lasso, keeping those with non-zero coefficients as inputs to the downstream model.\n",
    "10. Train with machine-learning algorithms such as LR, SVM, or RF.\n",
    "11. Visualize model results, e.g. AUC, ROC curves, and confusion matrices.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81f47352",
   "metadata": {},
   "source": [
    "### Specifying the data\n",
    "\n",
    "This module has three parameters you need to define yourself:\n",
    "\n",
    "1. `mydir`: path where the data are stored.\n",
    "2. `labelf`: file with each sample's label information.\n",
    "3. `labels`: the targets the AI system should learn, e.g. tumor benignity/malignancy or T-stage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c779d76",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import pandas as pd\n",
    "from IPython.display import display\n",
    "os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'\n",
    "from onekey_algo import OnekeyDS as okds\n",
    "from onekey_algo import get_param_in_cwd\n",
    "\n",
    "os.makedirs('img', exist_ok=True)\n",
    "os.makedirs('results', exist_ok=True)\n",
    "os.makedirs('features', exist_ok=True)\n",
    "\n",
    "# Prefix for task artifacts\n",
    "task_type = 'Radiomics_'\n",
    "# Data directory\n",
    "# mydir = r'path to your own data'\n",
    "mydir = get_param_in_cwd('radio_dir')\n",
    "if mydir == okds.ct:\n",
    "    print(f'Using the Onekey sample data: {okds.ct}. If this is not what you expect, change the directory!')\n",
    "# Corresponding label file\n",
    "group_info = get_param_in_cwd('dataset_column') or 'group'\n",
    "labelf = get_param_in_cwd('label_file') or os.path.join(mydir, 'label.csv')\n",
    "# Column name(s) of the label data\n",
    "labels = [get_param_in_cwd('task_column') or 'label']"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9e2a7b3",
   "metadata": {},
   "source": [
    "### Matching images and masks\n",
    "\n",
    "The file names in the `images` and `masks` folders must correspond one-to-one. E.g. if `1.nii.gz` is a file in `images`, a file named `1.nii.gz` must also exist in `masks`.\n",
    "\n",
    "You can also use a custom function to fetch and parse the data."
   ]
  },
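  {
   "cell_type": "markdown",
   "id": "a3f1c2d9",
   "metadata": {},
   "source": [
    "The pairing logic can be sketched with the standard library alone. `get_image_mask_from_dir` is Onekey's own implementation; the function below (`pair_images_and_masks` is a hypothetical name) only illustrates the one-to-one matching it assumes:\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "def pair_images_and_masks(root, images='images', masks='masks'):\n",
    "    # Keep only file names present in both folders, in sorted order.\n",
    "    image_dir, mask_dir = Path(root) / images, Path(root) / masks\n",
    "    names = sorted(p.name for p in image_dir.iterdir())\n",
    "    names = [n for n in names if (mask_dir / n).exists()]\n",
    "    return [image_dir / n for n in names], [mask_dir / n for n in names]\n",
    "```"
   ]
  },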
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e865bf67",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "from onekey_algo.custom.components.Radiology import diagnose_3d_image_mask_settings, get_image_mask_from_dir\n",
    "\n",
    "# Build one-to-one image/mask pairs. You may substitute your own logic.\n",
    "images, masks = get_image_mask_from_dir(mydir, images='images', masks='masks')\n",
    "# diagnose_3d_image_mask_settings(images, masks, verbose=True, assume_masks=[0, 1])\n",
    "print(f'Found {len(images)} samples.')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "deed41d5",
   "metadata": {},
   "source": [
    "# Conventional radiomics features\n",
    "\n",
    "Conventional radiomics features are extracted with pyradiomics. Normally this part needs no modification; the Onekey wrapper interfaces are shown below.\n",
    "\n",
    "```python\n",
    "def extract(self, images: Union[str, List[str]], \n",
    "            masks: Union[str, List[str]], labels: Union[int, List[int]] = 1, settings=None)\n",
    "\"\"\"\n",
    "    * images: list of images to extract features from.\n",
    "    * masks: list of masks corresponding one-to-one with `images`.\n",
    "    * labels: which mask label(s) to extract features for. Defaults to label=1.\n",
    "    * settings: additional extraction parameters. Defaults to None.\n",
    "\"\"\"\n",
    "```\n",
    "\n",
    "```python\n",
    "def get_label_data_frame(self, label: int = 1, column_names=None, images='images', masks='labels')\n",
    "\"\"\"\n",
    "    * label: fetch the features extracted for this label.\n",
    "    * column_names: defaults to None, i.e. use the column names set by the program.\n",
    "\"\"\"\n",
    "```\n",
    "\n",
    "```python\n",
    "def get_image_mask_from_dir(root, images='images', masks='labels')\n",
    "\"\"\"\n",
    "    * root: directory to extract features from.\n",
    "    * images: folder name of the raw data inside `root`.\n",
    "    * masks: folder name of the annotation data inside `root`.\n",
    "\"\"\"\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5775ad9f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import warnings\n",
    "import pandas as pd\n",
    " \n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "from onekey_algo.custom.components.Radiology import ConventionalRadiomics\n",
    "\n",
    "rad_ = None\n",
    "if os.path.exists(f'features/rad_features.csv'):\n",
    "    rad_data = pd.read_csv(f'features/rad_features.csv', header=0)\n",
    "else:\n",
    "    images, masks = get_image_mask_from_dir(mydir, images='images', masks='masks')\n",
    "    # Use param_file to customize the feature-extraction settings.\n",
    "    param_file = get_param_in_cwd('extractor_settings')\n",
    "    radiomics = ConventionalRadiomics(param_file, correctMask=True)\n",
    "    radiomics.extract(images, masks, workers=1, \n",
    "                      with_fd=get_param_in_cwd('with_fd'), \n",
    "                      with_top=get_param_in_cwd('with_top'),\n",
    "                      with_hessian=get_param_in_cwd('with_hessian'))\n",
    "    rad_data = radiomics.get_label_data_frame(label=1)\n",
    "    rad_data.columns = [f\"intra_{c.replace('-', '_')}\" if c != 'ID' else c for c in rad_data.columns]\n",
    "    rad_data.to_csv(f'features/rad_features.csv', header=True, index=False)\n",
    "if rad_ is None:\n",
    "    rad_ = rad_data\n",
    "else:\n",
    "    rad_ = pd.merge(rad_, rad_data, on='ID', how='inner')\n",
    "rad_data = rad_\n",
    "rad_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd52a592",
   "metadata": {},
   "source": [
    "## Feature statistics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3413c7d",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "# Count features per group (the group name is the second-to-last '_' token)\n",
    "feature_groups = pd.Series([c.split('_')[-2] for c in rad_data.columns if c != 'ID'])\n",
    "sorted_counts = feature_groups.value_counts().rename_axis('feature_group').reset_index(name='count')\n",
    "sorted_counts = sorted_counts.sort_values('feature_group')\n",
    "display(sorted_counts)\n",
    "\n",
    "plt.figure(figsize=(20, 10))\n",
    "ax = plt.subplot(121)\n",
    "plt.pie(sorted_counts['count'], labels=[i for i in sorted_counts['feature_group']], startangle=0,\n",
    "        counterclock = False, autopct = '%.1f%%')\n",
    "# plt.bar_label(bar.containers[0])\n",
    "ax = plt.subplot(122)\n",
    "ax.spines['top'].set_visible(False)\n",
    "ax.spines['right'].set_visible(False)\n",
    "bar = sns.barplot(data=sorted_counts, x='feature_group', y='count', )\n",
    "plt.savefig(f'img/{task_type}feature_ratio.svg', bbox_inches = 'tight')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9715d329",
   "metadata": {},
   "source": [
    "## Label data\n",
    "\n",
    "Labels are stored as CSV here; for other formats, you can read out each sample's outcome with a custom function.\n",
    "\n",
    "`label_data` must be a `DataFrame` containing an `ID` column followed by the label columns; multiple label columns are supported for Multi-Task learning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a76b576c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from onekey_algo.custom.components.comp1 import fillna\n",
    "label_data = pd.read_csv(labelf, dtype={'ID': str})\n",
    "# Append the .nii.gz suffix so label IDs match the image file names\n",
    "label_data['ID'] = label_data['ID'].map(lambda x: f\"{x}.nii.gz\" if not (f\"{x}\".endswith('.nii.gz') or f\"{x}\".endswith('.nii')) else x)\n",
    "label_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92814b27",
   "metadata": {},
   "source": [
    "## Feature joining\n",
    "\n",
    "Merge the label data `label_data` with `rad_data` to obtain the training data.\n",
    "\n",
    "**Note:**\n",
    "1. The `ID` column needs to be dropped before modeling.\n",
    "2. If rows go missing after the merge, check whether the two tables' IDs actually match."
   ]
  },
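  {
   "cell_type": "markdown",
   "id": "b7e2d4f1",
   "metadata": {},
   "source": [
    "If rows go missing, an outer merge with `indicator=True` reveals which IDs failed to match on either side (the sample data below is made up):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "rad = pd.DataFrame({'ID': ['1.nii.gz', '2.nii.gz'], 'feat': [0.1, 0.2]})\n",
    "lab = pd.DataFrame({'ID': ['1.nii.gz', '3.nii.gz'], 'label': [0, 1]})\n",
    "\n",
    "check = pd.merge(rad, lab, on='ID', how='outer', indicator=True)\n",
    "# Rows tagged 'left_only' / 'right_only' are the unmatched IDs\n",
    "unmatched = check[check['_merge'] != 'both']['ID'].tolist()\n",
    "```"
   ]
  },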
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78982ba4",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "from onekey_algo.custom.utils import print_join_info\n",
    "event_col = get_param_in_cwd('event_col')\n",
    "duration_col= get_param_in_cwd('duration_col')\n",
    "\n",
    "print_join_info(rad_data, label_data)\n",
    "combined_data = pd.merge(rad_data, label_data[['ID', event_col, duration_col, 'group']], on=['ID'], how='inner')\n",
    "print(combined_data[event_col].value_counts())\n",
    "combined_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d6fc73b",
   "metadata": {},
   "source": [
    "## Normalization\n",
    "\n",
    "`normalize_df` is Onekey's normalization API; it transforms the data to zero mean and unit variance (z-score):\n",
    "\n",
    "$column = \\frac{column - mean}{std}$"
   ]
  },
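  {
   "cell_type": "markdown",
   "id": "c8d3e5a2",
   "metadata": {},
   "source": [
    "A minimal pandas sketch of the same z-score transform (not the actual `normalize_df` code): as with `use_train=True` below, the mean and std must be computed on the training split only and then applied to every cohort.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def zscore_by_train(df, feature_cols, group_col='group'):\n",
    "    # Compute mean/std on the train split, apply them to all rows.\n",
    "    train = df[df[group_col] == 'train']\n",
    "    out = df.copy()\n",
    "    out[feature_cols] = (df[feature_cols] - train[feature_cols].mean()) / train[feature_cols].std()\n",
    "    return out\n",
    "\n",
    "df = pd.DataFrame({'f1': [1.0, 2.0, 3.0, 10.0],\n",
    "                   'group': ['train', 'train', 'train', 'test']})\n",
    "norm = zscore_by_train(df, ['f1'])  # train rows -> -1, 0, 1; test row -> 8\n",
    "```"
   ]
  },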
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be06d94d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from onekey_algo.custom.components.comp1 import normalize_df\n",
    "data = normalize_df(combined_data, not_norm=['ID', event_col, duration_col], group='group', use_train=True)\n",
    "data = data.dropna(axis=1)\n",
    "data.describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "803954f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from onekey_algo.custom.components.comp1 import normalize_df, select_feature\n",
    "\n",
    "corr_name = get_param_in_cwd('corr_name', 'pearson')\n",
    "# `and False` forces the selection to rerun every time; remove it to reuse the cached file\n",
    "if os.path.exists(f'features/{task_type}rad_features_sel.csv') and False:\n",
    "    data = pd.read_csv(f'features/{task_type}rad_features_sel.csv', header=0)\n",
    "else:\n",
    "    tgroup = data[data['group'] == 'train']\n",
    "    sel_feature = select_feature(tgroup[[c for c in tgroup.columns if c not in [event_col, duration_col]]].corr(corr_name), \n",
    "                                 threshold=0.9, topn=2048, verbose=False)\n",
    "    data = data[['ID'] + sel_feature + [event_col, duration_col, 'group']]\n",
    "    data.to_csv(f'features/{task_type}rad_features_sel.csv', header=True, index=False)\n",
    "data"
   ]
  },
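  {
   "cell_type": "markdown",
   "id": "d9e4f6b3",
   "metadata": {},
   "source": [
    "`select_feature` is an Onekey helper; conceptually it drops one feature from every pair whose correlation exceeds the threshold. A minimal pandas sketch of that idea (not the exact Onekey algorithm):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def drop_correlated(df, threshold=0.9):\n",
    "    # Scan the upper triangle of |corr|; drop a column if it correlates\n",
    "    # above the threshold with any earlier column.\n",
    "    corr = df.corr().abs()\n",
    "    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))\n",
    "    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]\n",
    "    return [c for c in df.columns if c not in to_drop]\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "a = rng.normal(size=100)\n",
    "df = pd.DataFrame({'a': a,\n",
    "                   'b': a + rng.normal(scale=0.01, size=100),  # near-duplicate of 'a'\n",
    "                   'c': rng.normal(size=100)})\n",
    "kept = drop_correlated(df)  # drops 'b'\n",
    "```"
   ]
  },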
  {
   "cell_type": "markdown",
   "id": "3c4be710",
   "metadata": {},
   "source": [
    "## Building the datasets\n",
    "\n",
    "Separate the training features X from the supervision y, and split the data; an 80%/20% split is the usual rule of thumb."
   ]
  },
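  {
   "cell_type": "markdown",
   "id": "e1f5a7c4",
   "metadata": {},
   "source": [
    "This notebook reads a precomputed `group` column; when no multi-center cohorts are available, a random 80%/20% split can be produced with scikit-learn (the toy frame below is made up):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "df = pd.DataFrame({'feat': range(10), 'label': [0, 1] * 5})\n",
    "# stratify keeps the label ratio identical in both splits\n",
    "train_df, test_df = train_test_split(df, test_size=0.2, random_state=0,\n",
    "                                     stratify=df['label'])\n",
    "```"
   ]
  },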
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f8058ce",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import onekey_algo.custom.components as okcomp\n",
    "from collections import OrderedDict\n",
    "\n",
    "\n",
    "train_data = data[(data[group_info] == 'train')]\n",
    "\n",
    "# subsets = [s for s in label_data['group'].value_counts().index if s != 'train']\n",
    "subsets = get_param_in_cwd('subsets')\n",
    "val_datasets = OrderedDict()\n",
    "for subset in subsets:\n",
    "    val_data = data[data[group_info] == subset]\n",
    "    val_datasets[subset] = val_data\n",
    "    val_data.to_csv(f'features/{task_type}{subset}_features_norm.csv', index=False)\n",
    "\n",
    "print('; '.join([f\"{subset} samples: {d_.shape}\" for subset, d_ in val_datasets.items()]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06d6c8ca",
   "metadata": {},
   "source": [
    "# Univariate Cox\n",
    "\n",
    "When there are many features, univariate Cox regression can pre-filter them: each feature is fit in its own Cox model, and only features passing the p-value threshold are kept.\n",
    "\n",
    "**verbose**: defaults to False, i.e. no log output.\n",
    "\n",
    "**topk**: keep at most the top-k features."
   ]
  },
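  {
   "cell_type": "markdown",
   "id": "f2a6b8d5",
   "metadata": {},
   "source": [
    "`uni_cox` is an Onekey wrapper; the underlying idea is to fit one single-covariate Cox model per feature and keep the features whose p-value passes the threshold. A rough sketch with lifelines (which this notebook already uses elsewhere), on made-up survival data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from lifelines import CoxPHFitter\n",
    "\n",
    "def univariate_cox(df, duration_col, event_col, cols, pvalue_thres=0.05):\n",
    "    # One Cox model per feature; keep features with p below the threshold.\n",
    "    selected = []\n",
    "    for col in cols:\n",
    "        cph = CoxPHFitter()\n",
    "        cph.fit(df[[col, duration_col, event_col]],\n",
    "                duration_col=duration_col, event_col=event_col)\n",
    "        if cph.summary.loc[col, 'p'] < pvalue_thres:\n",
    "            selected.append(col)\n",
    "    return selected\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=150)\n",
    "df = pd.DataFrame({'x': x,\n",
    "                   'z': rng.normal(size=150),            # pure noise\n",
    "                   'time': rng.exponential(np.exp(-x)),  # larger x -> shorter survival\n",
    "                   'event': 1})\n",
    "sel = univariate_cox(df, 'time', 'event', ['x', 'z'])\n",
    "```"
   ]
  },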
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "97072c46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from onekey_algo.custom.components.survival import uni_cox\n",
    "\n",
    "# `and False` forces the univariate selection to rerun; remove it to reuse the cached file\n",
    "if os.path.exists(f'features/{task_type}rad_features_unisel.csv') and False:\n",
    "    train_data = pd.read_csv(f'features/{task_type}rad_features_unisel.csv')\n",
    "else:\n",
    "    sel_features = uni_cox(train_data, duration_col=duration_col, event_col=event_col,\n",
    "                           cols=[c for c in train_data.columns if c not in [event_col, duration_col, 'ID', 'group']], \n",
    "                           verbose=False, pvalue_thres=get_param_in_cwd('pvalue', 0.05), topk=16)\n",
    "    train_data = train_data[['ID'] + sel_features + [event_col, duration_col, 'group']]\n",
    "    train_data.to_csv(f'features/{task_type}rad_features_unisel.csv', header=True, index=False)\n",
    "train_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14751e1b",
   "metadata": {},
   "source": [
    "# Cox-Lasso feature selection"
   ]
  },
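  {
   "cell_type": "markdown",
   "id": "a7b9c1e6",
   "metadata": {},
   "source": [
    "`lasso_cox_cv` is Onekey's wrapper around cross-validated Lasso-Cox. The idea can be sketched with lifelines' penalized Cox model: `l1_ratio=1.0` makes the elastic-net penalty pure Lasso, and coefficients are then thresholded. Note that lifelines smooths the L1 term, so coefficients are only approximately zero; the penalizer value below is arbitrary and the data are made up.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from lifelines import CoxPHFitter\n",
    "\n",
    "COEF_THRESHOLD = 1e-6\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "X = pd.DataFrame(rng.normal(size=(200, 4)), columns=['f1', 'f2', 'f3', 'f4'])\n",
    "df = X.copy()\n",
    "df['time'] = rng.exponential(np.exp(-X['f1']))  # only f1 drives the hazard\n",
    "df['event'] = 1\n",
    "\n",
    "cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)  # pure-Lasso penalty\n",
    "cph.fit(df, duration_col='time', event_col='event')\n",
    "# Keep features whose penalized coefficient is not (numerically) zero\n",
    "selected = cph.params_[cph.params_.abs() > COEF_THRESHOLD].index.tolist()\n",
    "```"
   ]
  },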
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33550fd5",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "from onekey_algo.custom.components.survival import get_x_y_survival, lasso_cox_cv\n",
    "COEF_THRESHOLD = 1e-6\n",
    "\n",
    "X, y = get_x_y_survival(train_data, val_outcome=1, event_col=event_col, duration_col=duration_col)\n",
    "sel_features = lasso_cox_cv(X, y, max_iter=100,  norm_X=False, prefix=f\"{task_type}\", l1_ratio=0.5, cv=10, weights_fig_size=(10, 15))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37cfedef",
   "metadata": {},
   "source": [
    "### Selected features\n",
    "\n",
    "Keep the features that survive Lasso-Cox."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f82dc84",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data = train_data[['ID'] + list(sel_features.index) + [event_col, duration_col]]\n",
    "for subset in subsets:\n",
    "    val_datasets[subset] = val_datasets[subset][['ID'] + list(sel_features.index) + [event_col, duration_col]]\n",
    "    val_datasets[subset].to_csv(f'features/{task_type}{subset}_cox.csv', index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acebd04f",
   "metadata": {},
   "source": [
    "### Cluster analysis\n",
    "\n",
    "By changing the variable names, you can visualize the clustered correlation matrix under different correlation coefficients.\n",
    "\n",
    "Note: with very many features (more than about 100), avoid this visualization, as it runs extremely slowly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b590e44e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "if train_data.shape[1] < 150:\n",
    "    pp = sns.clustermap(train_data[[c for c in train_data.columns if c not in [event_col, duration_col]]].corr(corr_name), \n",
    "                        linewidths=.5, figsize=(20.0, 16.0), cmap='YlGnBu')\n",
    "    plt.setp(pp.ax_heatmap.get_yticklabels(), rotation=0)\n",
    "    plt.savefig(f'img/{task_type}feature_cluster.svg', bbox_inches = 'tight')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f6a36d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from lifelines import CoxPHFitter\n",
    "\n",
    "cph = CoxPHFitter(penalizer=0.1)\n",
    "cph.fit(train_data[[c for c in train_data.columns if c != 'ID']], duration_col=duration_col, event_col=event_col)\n",
    "cph.print_summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "585adbaf",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(cph.concordance_index_)\n",
    "su = cph.summary[['exp(coef)', 'exp(coef) lower 95%', 'exp(coef) upper 95%', 'p']]\n",
    "su.columns = ['HR', 'HR lower 95%', 'HR upper 95%', 'pvalue']\n",
    "su.reset_index().to_csv(f'features/{task_type}features_HR.csv', index=False)\n",
    "su"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4d644fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.figure(figsize=(10, 8))\n",
    "cph.plot(hazard_ratios=True)\n",
    "plt.savefig(f'img/{task_type}feature_pvalue.svg')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fdfc0479",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "from lifelines import CoxPHFitter\n",
    "from lifelines.statistics import logrank_test\n",
    "from lifelines import KaplanMeierFitter\n",
    "from lifelines.plotting import add_at_risk_counts\n",
    "\n",
    "thres = 1e-4\n",
    "bst_split = {'train': 2.0, 'val':1, 'test': 1}\n",
    "loc = {'train': 0.2, 'val':0.5, 'test': 0.5}\n",
    "for subset, test_data in val_datasets.items():\n",
    "    c_index = cph.score(test_data[[c for c in test_data.columns if c != 'ID']], scoring_method=\"concordance_index\")\n",
    "    # Option 1: group by the model-predicted survival time vs. the cohort mean\n",
    "    # y_pred = cph.predict_median(test_data[[c for c in test_data.columns if c != 'ID']])\n",
    "    # cox_data = pd.concat([test_data, y_pred], axis=1)\n",
    "    # mean = cox_data.describe()[0.5]['mean']\n",
    "    # cox_data['HR'] = cox_data[0.5] < mean\n",
    "\n",
    "    # Option 2: group by the median of the model-predicted partial hazard\n",
    "    y_pred = cph.predict_partial_hazard(test_data[[c for c in test_data.columns if c != 'ID']])\n",
    "    cox_data = pd.concat([test_data, y_pred], axis=1)\n",
    "    median = cox_data.describe()[0]['50%']\n",
    "    cox_data['HR'] = cox_data[0] > median\n",
    "    \n",
    "    # Option 3: group by a fixed HR threshold\n",
    "#     cox_data['HR'] = cox_data[0] > 1\n",
    "#     cox_data['HR'] = cox_data[0] > bst_split[subset]\n",
    "\n",
    "    dem = (cox_data[\"HR\"] == True)\n",
    "    results = logrank_test(cox_data[duration_col][dem], cox_data[duration_col][~dem], \n",
    "                           event_observed_A=cox_data[event_col][dem], event_observed_B=cox_data[event_col][~dem])\n",
    "    p_value = f\"={results.p_value:.3f}\" if results.p_value > thres else f'<{thres}'\n",
    "    plt.title(f\"Cohort {subset} C-index:{c_index:.3f}\")\n",
    "    plt.ylabel('Probability')\n",
    "    # Only fit/plot groups that are non-empty, so add_at_risk_counts never\n",
    "    # references an undefined fitter.\n",
    "    fitters = []\n",
    "    if sum(dem):\n",
    "        kmf_high = KaplanMeierFitter()\n",
    "        kmf_high.fit(cox_data[duration_col][dem], event_observed=cox_data[event_col][dem], label=\"High Risk\")\n",
    "        kmf_high.plot_survival_function(color='r')\n",
    "        fitters.append(kmf_high)\n",
    "    if sum(~dem):\n",
    "        kmf_low = KaplanMeierFitter()\n",
    "        kmf_low.fit(cox_data[duration_col][~dem], event_observed=cox_data[event_col][~dem], label=\"Low Risk\")\n",
    "        kmf_low.plot_survival_function(color='g')\n",
    "        fitters.append(kmf_low)\n",
    "    plt.text(0.5, loc[subset] if subset in loc else 0.2, f\"P{p_value}\")\n",
    "    plt.xlabel('Time(months)')\n",
    "    add_at_risk_counts(*fitters, rows_to_show=['At risk'])\n",
    "    plt.savefig(f'img/{task_type}KM_{subset}.svg', bbox_inches='tight')\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "662f86f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "\n",
    "def get_prediction(model: CoxPHFitter, data, ID=None, **kwargs):\n",
    "    # Predict the partial hazard and expected survival time for each sample.\n",
    "    hr = model.predict_partial_hazard(data)\n",
    "    expectation = model.predict_expectation(data)\n",
    "    if ID is not None:\n",
    "        predictions = pd.concat([ID, hr, expectation], axis=1)\n",
    "        predictions.columns = ['ID', 'HR', 'expectation']\n",
    "    else:\n",
    "        predictions = pd.concat([hr, expectation], axis=1)\n",
    "        predictions.columns = ['HR', 'expectation']\n",
    "    return predictions\n",
    "\n",
    "os.makedirs('results', exist_ok=True)\n",
    "info = []\n",
    "for subset, test_data in val_datasets.items():\n",
    "    if subset in get_param_in_cwd('subsets'):\n",
    "        results = get_prediction(cph, test_data, ID=test_data['ID'])\n",
    "        results.to_csv(f'results/{task_type}cox_predictions_{subset}.csv', index=False)\n",
    "        results['group'] = subset\n",
    "        info.append(results)\n",
    "        pd.merge(results, label_data[['ID', event_col, duration_col]], on='ID', how='inner').to_csv(f'features/{task_type}4xtile_{subset}.txt', \n",
    "                                                                                                    index=False, sep='\\t')\n",
    "info = pd.concat(info, axis=0)\n",
    "info"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3cc110e8",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
