{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "325403a3-9be9-403f-a44f-36d20054789a",
   "metadata": {},
   "source": [
    "# SplitRec: Using FeatureInferenceAttack in SecretFlow Split Learning\n",
    "In federated learning, an attacker can eavesdrop on the values and gradients transmitted during model training to attack the other party's model or data, inferring useful information and causing information leakage.\n",
    "\n",
    "This tutorial considers feature inference attacks in two-party split learning, and shows how to use the GRN attack from [*Feature Inference Attacks on Model Predictions in Vertical Federated Learning*](https://arxiv.org/abs/2010.10152) in SecretFlow.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e36b04cd-77dc-4fa6-937d-4a1f3b9e1f16",
   "metadata": {},
   "source": [
    "## Feature Inference Attack with Generative Regression Network\n",
    "In a feature inference attack, the party holding the labels acts as the attacker and infers the other party's features. After the federated model has been trained, the GRN attack predicts the victim's features with a generative regression network (the Generator Model): the Generator Model is trained by shrinking the gap between the federated model's output on the predicted features and its output on the real features, and can therefore recover the victim's features, as shown below.\n",
    "\n",
    "![fia0](resources/fia0.png)\n",
    "\n",
    "The Generator Model is trained as follows:\n",
    "\n",
    "1. Feed the attacker's features (blue) and randomly generated data (orange) into the Generator Model; its output is the predicted victim features\n",
    "2. Feed the attacker's features and the predicted victim features into the trained federated model and compute the logits\n",
    "3. Compute a loss between the logits from step 2 and the real logits (computed by feeding the attacker's features and the victim's real features into the federated model)\n",
    "4. Backpropagate the loss and update the Generator Model parameters\n",
    "\n",
    "The pseudocode of the algorithm is as follows:\n",
    "\n",
    "![fia1](resources/fia1.png)\n",
    "\n",
    "The loss function is defined as follows:\n",
    "\n",
    "![fia2](resources/fia2.png)"
   ]
  },
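  {
   "cell_type": "markdown",
   "id": "7f3d2c1a-5b6e-4d8f-9a0b-1c2d3e4f5a6b",
   "metadata": {},
   "source": [
    "The four steps above can be sketched in plain PyTorch. This is an illustrative sketch, not the SecretFlow implementation: fed_model, the generator architecture, the noise distribution and all shapes are stand-ins chosen for demonstration, and the mean/variance regularization terms of the paper's loss are omitted.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "attacker_dim, victim_dim, n_classes = 28, 20, 11\n",
    "\n",
    "# stand-ins for the trained federated model and the Generator Model\n",
    "fed_model = nn.Linear(attacker_dim + victim_dim, n_classes)\n",
    "generator = nn.Sequential(nn.Linear(attacker_dim + victim_dim, victim_dim), nn.Sigmoid())\n",
    "opt = torch.optim.Adam(generator.parameters(), lr=1e-4)\n",
    "\n",
    "x_attacker = torch.rand(64, attacker_dim)  # attacker's own features\n",
    "v_real = torch.rand(64, victim_dim)        # victim's real features (hidden from the attacker)\n",
    "with torch.no_grad():\n",
    "    true_logit = fed_model(torch.cat([x_attacker, v_real], dim=1))  # observed real logits\n",
    "\n",
    "for _ in range(100):\n",
    "    noise = torch.rand(64, victim_dim)                        # step 1: random data\n",
    "    v_hat = generator(torch.cat([x_attacker, noise], dim=1))  #         predicted victim features\n",
    "    logit = fed_model(torch.cat([x_attacker, v_hat], dim=1))  # step 2: federated model logits\n",
    "    loss = ((logit - true_logit) ** 2).mean()                 # step 3: distance to real logits\n",
    "    opt.zero_grad()\n",
    "    loss.backward()                                           # step 4: update the generator\n",
    "    opt.step()\n",
    "```"
   ]
  },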
  {
   "cell_type": "markdown",
   "id": "883eac32-bce1-435e-9922-98fd944c4e4b",
   "metadata": {},
   "source": [
    "## Attack Implementation in SecretFlow\n",
    "Attack methods in SecretFlow are implemented through the callback mechanism. The attack base class Callback is located in secretflow/ml/nn/sl/backend/torch/callback.py. We provide hooks at the following points of federated model training; an attack method injects its logic into the training process by implementing the attack algorithm in the corresponding hooks.\n",
    "\n",
    "- on_train_begin\n",
    "- on_train_end\n",
    "- on_epoch_begin\n",
    "- on_epoch_end\n",
    "- on_batch_begin\n",
    "- on_batch_end\n",
    "\n",
    "To implement a custom attack method, a user needs to:\n",
    "\n",
    "1. Define a CustomAttacker that inherits from the base class Callback and implement the attack logic in the corresponding hook functions\n",
    "2. Define an attacker_builder function that constructs the attacker\n",
    "3. Define the sl_model as in ordinary split learning training, and pass a callback dict {party -> attacker_builder} to the callbacks parameter when calling sl_model.fit()\n",
    "\n",
    "For step 1, refer to the existing FeatureInferenceAttacker/LabelInferenceAttacker in SecretFlow; for steps 2 and 3, refer to the usage of FeatureInferenceAttacker below."
   ]
  },
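  {
   "cell_type": "markdown",
   "id": "9c8b7a6d-1e2f-4a3b-8c4d-5e6f7a8b9c0d",
   "metadata": {},
   "source": [
    "Steps 1 and 2 can be sketched as follows. This is a minimal self-contained illustration of the hook pattern: the Callback class here is a simplified stand-in for the SecretFlow base class, and CustomAttacker and its attack logic are hypothetical.\n",
    "\n",
    "```python\n",
    "class Callback:  # simplified stand-in for the SecretFlow base class\n",
    "    def on_train_begin(self): pass\n",
    "    def on_train_end(self): pass\n",
    "    def on_epoch_begin(self, epoch): pass\n",
    "    def on_epoch_end(self, epoch): pass\n",
    "    def on_batch_begin(self, batch): pass\n",
    "    def on_batch_end(self, batch): pass\n",
    "\n",
    "\n",
    "class CustomAttacker(Callback):  # step 1: implement attack logic in hooks\n",
    "    def __init__(self):\n",
    "        self.attacked = False\n",
    "\n",
    "    def on_train_end(self):  # e.g. run the attack once training finishes\n",
    "        self.attacked = True\n",
    "\n",
    "\n",
    "def attacker_builder():  # step 2: a builder that constructs the attacker\n",
    "    return CustomAttacker()\n",
    "```\n",
    "\n",
    "For step 3, the dict passed to sl_model.fit() would then look like {alice: attacker_builder}."
   ]
  },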
  {
   "cell_type": "markdown",
   "id": "2dc9a8f4-7335-4380-a8ab-f10d93548382",
   "metadata": {},
   "source": [
    "## SecretFlow Wrapper of the Feature Inference Attack\n",
    "SecretFlow provides wrappers for several attack methods. For the attack in the paper, we provide the FeatureInferenceAttacker wrapper; its usage is shown in the code below.\n",
    "\n",
    "First, as in ordinary split learning training, we preprocess the data and define an SLModel.\n",
    "\n",
    "Then we define an attacker_builder that invokes FeatureInferenceAttacker, and pass it to SLModel fit to run training and the attack."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "417d8a72-1c4f-40b9-bbea-2e9c6ca8e86a",
   "metadata": {},
   "source": [
    "## Environment Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "6adf7fd9-cf8c-4dad-913f-512d3bfedade",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The version of SecretFlow: 1.1.0.dev20230926\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-09-26 19:49:40,269\tINFO worker.py:1538 -- Started a local Ray instance.\n"
     ]
    }
   ],
   "source": [
    "import secretflow as sf\n",
    "\n",
    "# Check the version of your SecretFlow\n",
    "print('The version of SecretFlow: {}'.format(sf.__version__))\n",
    "\n",
    "# In case you have a running secretflow runtime already.\n",
    "sf.shutdown()\n",
    "sf.init(['alice', 'bob'], address=\"local\")\n",
    "alice, bob = sf.PYU('alice'), sf.PYU('bob')\n",
    "device_y = alice"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7afcf4b7-c795-4c12-b725-7607f5b15018",
   "metadata": {},
   "source": [
    "## The Dataset\n",
    "We use the UCI Sensorless Drive Diagnosis dataset, which has 48 features and 11 classes.\n",
    "\n",
    "We split the data vertically: the attacker holds 28 features and the labels, and the victim holds the other 20 features.\n",
    "\n",
    "[Dataset homepage](http://archive.ics.uci.edu/dataset/325/dataset+for+sensorless+drive+diagnosis)\n",
    "\n",
    "You can download the dataset used by the paper's code here: [drive_cleaned.csv](https://raw.githubusercontent.com/xinjianluo/featureinference-vfl/master/datasets/drive_cleaned.csv)\n",
    "\n",
    "or use the demo data drive_cleaned_demo.csv that we provide."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f5affd8-8fbc-4fdb-9d1c-ff73a0a18d82",
   "metadata": {},
   "source": [
    "## Prepare the Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "2ffa6eb0-a40a-4424-b59a-968711fa79d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from secretflow.data.ndarray import FedNdarray, PartitionWay\n",
    "from secretflow.utils.simulation.datasets import _DATASETS, get_dataset\n",
    "\n",
    "\n",
    "def prepare_data():\n",
    "    data_path = get_dataset(_DATASETS['drive_cleaned'])\n",
    "    full_data_table = np.genfromtxt(data_path, delimiter=',')\n",
    "    samples = full_data_table[:, :-1].astype(np.float32)\n",
    "    labels = full_data_table[:, -1].astype(np.int64)\n",
    "\n",
    "    # permute columns\n",
    "    batch, columns = samples.shape\n",
    "    permu_cols = np.random.permutation(columns)\n",
    "    samples = samples[:, permu_cols]\n",
    "\n",
    "    # normalize features to [0, 1]\n",
    "    fea_min = samples.min(axis=0)\n",
    "    fea_max = samples.max(axis=0)\n",
    "    samples = (samples - fea_min) / (fea_max - fea_min)\n",
    "    mean_attr = samples.mean(axis=0)\n",
    "\n",
    "    # split train, test, pred\n",
    "    random_selection = np.random.rand(samples.shape[0]) <= 0.6\n",
    "    train_sample = samples[random_selection]\n",
    "    train_label = labels[random_selection]\n",
    "    sample_left = samples[~random_selection]\n",
    "    label_left = labels[~random_selection]\n",
    "\n",
    "    random_selection = np.random.rand(sample_left.shape[0]) <= 0.5\n",
    "    test_sample = sample_left[random_selection]\n",
    "    test_label = label_left[random_selection]\n",
    "    pred_sample = sample_left[~random_selection]\n",
    "    pred_label = label_left[~random_selection]\n",
    "\n",
    "    return (\n",
    "        train_sample,\n",
    "        train_label,\n",
    "        test_sample,\n",
    "        test_label,\n",
    "        pred_sample,\n",
    "        pred_label,\n",
    "        mean_attr,\n",
    "    )\n",
    "\n",
    "\n",
    "(\n",
    "    train_fea,\n",
    "    train_label,\n",
    "    test_fea,\n",
    "    test_label,\n",
    "    pred_fea,\n",
    "    pred_label,\n",
    "    mean_attr,\n",
    ") = prepare_data()\n",
    "\n",
    "bob_mean = mean_attr[28:]\n",
    "\n",
    "fed_data = FedNdarray(\n",
    "    partitions={\n",
    "        alice: alice(lambda x: x[:, :28])(train_fea),\n",
    "        bob: bob(lambda x: x[:, 28:])(train_fea),\n",
    "    },\n",
    "    partition_way=PartitionWay.VERTICAL,\n",
    ")\n",
    "test_fed_data = FedNdarray(\n",
    "    partitions={\n",
    "        alice: alice(lambda x: x[:, :28])(test_fea),\n",
    "        bob: bob(lambda x: x[:, 28:])(test_fea),\n",
    "    },\n",
    "    partition_way=PartitionWay.VERTICAL,\n",
    ")\n",
    "test_data_label = device_y(lambda x: x)(test_label)\n",
    "\n",
    "label = device_y(lambda x: x)(train_label)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15d946f8-0038-4c59-b804-a119a016fc2a",
   "metadata": {},
   "source": [
    "## Define the SL Model Structure"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8bf15a00-2622-4d2b-865f-c5bbedf08797",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "from secretflow_fl.ml.nn.core.torch import BaseModule\n",
    "\n",
    "\n",
    "class SLBaseNet(BaseModule):\n",
    "    def __init__(self):\n",
    "        super(SLBaseNet, self).__init__()\n",
    "        self.linear = nn.Linear(10, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # identity base model: features are passed through unchanged\n",
    "        return x\n",
    "\n",
    "    def output_num(self):\n",
    "        return 1\n",
    "\n",
    "\n",
    "class SLFuseModel(BaseModule):\n",
    "    def __init__(self, input_dim=48, output_dim=11):\n",
    "        super(SLFuseModel, self).__init__()\n",
    "        torch.manual_seed(1234)\n",
    "        self.dense = nn.Sequential(\n",
    "            nn.Linear(input_dim, 600),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(600, 300),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(300, 100),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(100, output_dim),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = torch.cat(x, dim=1)\n",
    "        return self.dense(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e74ac046-96a7-4aa4-9a01-46394e30b1fc",
   "metadata": {},
   "source": [
    "## Define the SL Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "98d7ca2d-c10d-4ccb-8340-ba9fb799cafb",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:root:Create proxy actor <class 'secretflow_fl.ml.nn.sl.backend.torch.strategy.split_nn.PYUSLTorchModel'> with party alice.\n",
      "INFO:root:Create proxy actor <class 'secretflow_fl.ml.nn.sl.backend.torch.strategy.split_nn.PYUSLTorchModel'> with party bob.\n"
     ]
    }
   ],
   "source": [
    "import torch.optim as optim\n",
    "from torchmetrics import Accuracy, Precision\n",
    "from secretflow_fl.ml.nn.core.torch import metric_wrapper, optim_wrapper, TorchModel\n",
    "from secretflow_fl.ml.nn import SLModel\n",
    "\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss\n",
    "optim_fn = optim_wrapper(torch.optim.Adam)\n",
    "base_model = TorchModel(\n",
    "    model_fn=SLBaseNet,\n",
    "    loss_fn=loss_fn,\n",
    "    optim_fn=optim_fn,\n",
    "    metrics=[\n",
    "        metric_wrapper(Accuracy, task=\"multiclass\", num_classes=11, average='micro'),\n",
    "        metric_wrapper(Precision, task=\"multiclass\", num_classes=11, average='micro'),\n",
    "    ],\n",
    ")\n",
    "\n",
    "fuse_model = TorchModel(\n",
    "    model_fn=SLFuseModel,\n",
    "    loss_fn=loss_fn,\n",
    "    optim_fn=optim_fn,\n",
    "    metrics=[\n",
    "        metric_wrapper(Accuracy, task=\"multiclass\", num_classes=11, average='micro'),\n",
    "        metric_wrapper(Precision, task=\"multiclass\", num_classes=11, average='micro'),\n",
    "    ],\n",
    ")\n",
    "\n",
    "base_model_dict = {\n",
    "    alice: base_model,\n",
    "    bob: base_model,\n",
    "}\n",
    "\n",
    "sl_model = SLModel(\n",
    "    base_model_dict=base_model_dict,\n",
    "    device_y=device_y,\n",
    "    model_fuse=fuse_model,\n",
    "    dp_strategy_dict=None,\n",
    "    compressor=None,\n",
    "    simulation=True,\n",
    "    random_seed=1234,\n",
    "    backend='torch',\n",
    "    strategy='split_nn',\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f182424d-1ac6-4674-94ba-e0d20a8d670f",
   "metadata": {},
   "source": [
    "## Define the attacker_builder\n",
    "### Define the Generator Model in FeatureInferenceAttacker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "1493b80c-4988-4828-b2c2-139cf49f6fe3",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Generator(nn.Module):\n",
    "    def __init__(self, latent_dim=48, target_dim=20):\n",
    "        super().__init__()\n",
    "        self.net = nn.Sequential(\n",
    "            nn.Linear(latent_dim, 600),\n",
    "            nn.LayerNorm(600),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(600, 200),\n",
    "            nn.LayerNorm(200),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(200, 100),\n",
    "            nn.LayerNorm(100),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(100, target_dim),\n",
    "            nn.Sigmoid(),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.net(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2982182-1da0-4ee0-825e-9db1c7c2738e",
   "metadata": {},
   "source": [
    "### Define the data_builder in FeatureInferenceAttacker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "5af36a38-c818-4c2d-be33-ff2d9686cac9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import Dataset, DataLoader, TensorDataset\n",
    "\n",
    "\n",
    "def data_builder(data, label, batch_size):\n",
    "    def prepare_data():\n",
    "        alice_data = data[:, :28]\n",
    "        bob_data = data[:, 28:]\n",
    "\n",
    "        alice_dataset = TensorDataset(torch.tensor(alice_data))\n",
    "        alice_dataloader = DataLoader(\n",
    "            dataset=alice_dataset,\n",
    "            shuffle=False,\n",
    "            batch_size=batch_size,\n",
    "        )\n",
    "\n",
    "        bob_dataset = TensorDataset(torch.tensor(bob_data))\n",
    "        bob_dataloader = DataLoader(\n",
    "            dataset=bob_dataset,\n",
    "            shuffle=False,\n",
    "            batch_size=batch_size,\n",
    "        )\n",
    "\n",
    "        dataloader_dict = {'alice': alice_dataloader, 'bob': bob_dataloader}\n",
    "        return dataloader_dict, dataloader_dict\n",
    "\n",
    "    return prepare_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed700079-748b-418a-a894-e599de0c3f44",
   "metadata": {},
   "source": [
    "### Define the attacker_builder\n",
    "The attacker_builder here is a dict whose entries map a party to its attacker_builder_function; usually only the attacking party and its attacker_builder_function need to be filled in.\n",
    "\n",
    "Because the feature inference attack in this tutorial needs the victim's base model parameters, the victim saves its base model to disk at the end of training and the attacker loads it from the same path; therefore both parties have a corresponding attacker_builder here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "2a18d6ae-2fa7-415e-b090-427d3f741333",
   "metadata": {},
   "outputs": [],
   "source": [
    "from secretflow_fl.ml.nn.sl.attacks.fia_torch import (\n",
    "    FeatureInferenceAttack,\n",
    "    # SaveModelCallback,\n",
    ")\n",
    "\n",
    "\n",
    "def create_attacker_builder(\n",
    "    model_save_path, bob_mean, pred_data, pred_label, batch_size, save_model_path\n",
    "):\n",
    "    def attacker_builder():\n",
    "        victim_model_dict = {\n",
    "            'bob': [SLBaseNet, model_save_path],\n",
    "        }\n",
    "        optim_fn = optim_wrapper(optim.Adam, lr=0.0001)\n",
    "        generator_model = TorchModel(\n",
    "            model_fn=Generator,\n",
    "            loss_fn=None,\n",
    "            optim_fn=optim_fn,\n",
    "            metrics=None,\n",
    "        )\n",
    "\n",
    "        data_buil = data_builder(pred_data, pred_label, batch_size)\n",
    "\n",
    "        attacker = FeatureInferenceAttack(\n",
    "            victim_model_dict=victim_model_dict,\n",
    "            base_model_list=['alice', 'bob'],\n",
    "            attack_party='alice',\n",
    "            generator_model_wrapper=generator_model,\n",
    "            data_builder=data_buil,\n",
    "            victim_fea_dim=20,\n",
    "            attacker_fea_dim=28,\n",
    "            enable_mean=True,\n",
    "            enable_var=True,\n",
    "            victim_mean_feature=bob_mean,\n",
    "            save_model_path=save_model_path,\n",
    "        )\n",
    "        return attacker\n",
    "\n",
    "    return attacker_builder\n",
    "\n",
    "\n",
    "# In Algorithm 2 line 9, the attacker infers v_hat, so the attacker needs the whole federated model (which may be unrealistic).\n",
    "# The victim bob calls this callback_builder to save its base model first; the attacker alice then loads the victim's model from the same path.\n",
    "# def create_victim_callback_builder(model_save_path):\n",
    "#     def builder():\n",
    "#         cb = SaveModelCallback(model_save_path)\n",
    "#         return cb\n",
    "\n",
    "#     return builder\n",
    "\n",
    "\n",
    "batch_size = 64\n",
    "import os\n",
    "import shutil\n",
    "\n",
    "fia_path = './model_saved'\n",
    "if os.path.exists(fia_path):\n",
    "    shutil.rmtree(fia_path)\n",
    "os.mkdir(fia_path)\n",
    "model_save_path = fia_path + '/sl_model_victim'\n",
    "generator_save_path = fia_path + '/generator'\n",
    "\n",
    "# callback_dict = {\n",
    "#     alice: create_attacker_builder(\n",
    "#         model_save_path,\n",
    "#         bob_mean,\n",
    "#         pred_fea,\n",
    "#         pred_label,\n",
    "#         batch_size,\n",
    "#         generator_save_path,\n",
    "#     ),\n",
    "#     bob: create_victim_callback_builder(model_save_path),\n",
    "# }"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e1110d7-6f66-4b5b-8f3d-95088fcef5c4",
   "metadata": {},
   "source": [
    "## Run Training and the Attack"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "d80f51bb-70d3-46c2-a5cc-f3fe45777b08",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:root:SL Train Params: {'x': FedNdarray(partitions={PYURuntime(alice): <secretflow.device.device.pyu.PYUObject object at 0x7f937485e4c0>, PYURuntime(bob): <secretflow.device.device.pyu.PYUObject object at 0x7f9374883730>}, partition_way=<PartitionWay.VERTICAL: 'vertical'>), 'y': <secretflow.device.device.pyu.PYUObject object at 0x7f937485ebe0>, 'batch_size': 64, 'epochs': 1, 'verbose': 1, 'callbacks': {PYURuntime(alice): <function create_attacker_builder.<locals>.attacker_builder at 0x7f9297b8bee0>, PYURuntime(bob): <function create_victim_callback_builder.<locals>.builder at 0x7f9374d0fdc0>}, 'validation_data': (FedNdarray(partitions={PYURuntime(alice): <secretflow.device.device.pyu.PYUObject object at 0x7f9374883dc0>, PYURuntime(bob): <secretflow.device.device.pyu.PYUObject object at 0x7f93748835b0>}, partition_way=<PartitionWay.VERTICAL: 'vertical'>), <secretflow.device.device.pyu.PYUObject object at 0x7f9374594190>), 'shuffle': False, 'sample_weight': None, 'validation_freq': 1, 'dp_spent_step_freq': None, 'dataset_builder': None, 'audit_log_params': {}, 'random_seed': 1234, 'audit_log_dir': None, 'self': <secretflow_fl.ml.nn.sl.sl_model.SLModel object at 0x7f9297bcc400>}\n",
      "\u001b[2m\u001b[36m(pid=1843185)\u001b[0m 2023-09-26 19:49:48.399383: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843449)\u001b[0m 2023-09-26 19:49:48.657976: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843185)\u001b[0m 2023-09-26 19:49:49.258363: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843185)\u001b[0m 2023-09-26 19:49:49.258463: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843185)\u001b[0m 2023-09-26 19:49:49.258475: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n",
      "\u001b[2m\u001b[36m(pid=1843449)\u001b[0m 2023-09-26 19:49:49.494645: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843449)\u001b[0m 2023-09-26 19:49:49.494731: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(pid=1843449)\u001b[0m 2023-09-26 19:49:49.494741: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n",
      "  0%|          | 0/5 [00:00<?, ?it/s]\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m /home/ssd2/zhaocaibei/miniconda3/envs/jupyter/lib/python3.8/site-packages/secretflow/ml/nn/sl/attack/torch/feature_inference_attack.py:112: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m   self.victim_mean_feature = torch.from_numpy(victim_mean_feature)\n",
      "\u001b[2m\u001b[36m(_run pid=1835170)\u001b[0m 2023-09-26 19:49:52.135553: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(_run pid=1835170)\u001b[0m 2023-09-26 19:49:52.965082: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(_run pid=1835170)\u001b[0m 2023-09-26 19:49:52.965192: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:/opt/rh/gcc-toolset-11/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-11/root/usr/lib/dyninst\n",
      "\u001b[2m\u001b[36m(_run pid=1835170)\u001b[0m 2023-09-26 19:49:52.965203: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n",
      "100%|██████████| 5/5 [00:03<00:00,  1.50it/s, epoch: 1/1 -  train_loss:1.714323878288269  train_MulticlassAccuracy:0.7785466909408569  train_MulticlassPrecision:0.7785466909408569  val_val_loss:1.5049391984939575  val_MulticlassAccuracy:1.0  val_MulticlassPrecision:1.0 ]\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 0, loss is 1.1522607803344727\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 1, loss is 0.8157393336296082\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 2, loss is 0.6134978532791138\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 3, loss is 0.48556119203567505\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 4, loss is 0.4039228558540344\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 5, loss is 0.3467817008495331\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 6, loss is 0.30952346324920654\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 7, loss is 0.2818766236305237\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 8, loss is 0.25987595319747925\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 9, loss is 0.24386532604694366\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 10, loss is 0.22900035977363586\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 11, loss is 0.2106342762708664\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 12, loss is 0.20029443502426147\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 13, loss is 0.19823205471038818\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 14, loss is 0.18952125310897827\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 15, loss is 0.1832481324672699\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 16, loss is 0.1828383505344391\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 17, loss is 0.17258648574352264\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 18, loss is 0.17232480645179749\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 19, loss is 0.1667376607656479\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 20, loss is 0.15784838795661926\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 21, loss is 0.15870268642902374\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 22, loss is 0.15539318323135376\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 23, loss is 0.1502450406551361\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 24, loss is 0.15154865384101868\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 25, loss is 0.1471061408519745\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 26, loss is 0.1450311243534088\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 27, loss is 0.14683523774147034\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 28, loss is 0.14344236254692078\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 29, loss is 0.13282963633537292\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 30, loss is 0.13793230056762695\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 31, loss is 0.13422174751758575\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 32, loss is 0.1318027377128601\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 33, loss is 0.12955468893051147\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 34, loss is 0.12937012314796448\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 35, loss is 0.1284986287355423\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 36, loss is 0.1253005415201187\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 37, loss is 0.13130077719688416\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 38, loss is 0.1258171647787094\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 39, loss is 0.12141703069210052\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 40, loss is 0.11995712667703629\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 41, loss is 0.1222689226269722\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 42, loss is 0.12318340688943863\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 43, loss is 0.12048806250095367\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 44, loss is 0.11977540701627731\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 45, loss is 0.11720554530620575\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 46, loss is 0.11837649345397949\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 47, loss is 0.11739760637283325\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 48, loss is 0.11748425662517548\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 49, loss is 0.11602818965911865\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 50, loss is 0.11085663735866547\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 51, loss is 0.11347486078739166\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 52, loss is 0.11071737110614777\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 53, loss is 0.1132960245013237\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 54, loss is 0.11038753390312195\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 55, loss is 0.10890733450651169\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 56, loss is 0.10951762646436691\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 57, loss is 0.10976753383874893\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 58, loss is 0.11292025446891785\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:In epoch 59, loss is 0.1031401976943016\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:Mean generator loss: 0.04493473656475544\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:Mean random guess loss: 0.17182568460702896\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:Mean generator loss Per Feature: [0.00954273 0.02475509 0.03384783 0.02044668 0.03715948 0.00336939\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.03370478 0.03777658 0.07355641 0.02407766 0.25157754 0.01827634\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.02219832 0.0097403  0.00756813 0.01786439 0.02057017 0.0055741\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.00618307 0.24090575]\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m INFO:root:Mean random guess loss Per Feature: [0.14084914 0.15443441 0.15860398 0.15230184 0.19461861 0.12846064\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.18179844 0.16156382 0.21596114 0.20383138 0.33147337 0.13473797\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.13449802 0.10377935 0.11541938 0.16272019 0.14198899 0.14747391\n",
      "\u001b[2m\u001b[36m(PYUSLTorchModel pid=1843185)\u001b[0m  0.12969266 0.34230641]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'train_loss': [array(1.7143239, dtype=float32)],\n",
       " 'train_MulticlassAccuracy': [tensor(0.7785)],\n",
       " 'train_MulticlassPrecision': [tensor(0.7785)],\n",
       " 'val_val_loss': [array(1.5049392, dtype=float32)],\n",
       " 'val_MulticlassAccuracy': [tensor(1.)],\n",
       " 'val_MulticlassPrecision': [tensor(1.)]}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sl_model.fit(\n",
    "    fed_data,\n",
    "    label,\n",
    "    validation_data=(test_fed_data, test_data_label),\n",
    "    epochs=1,\n",
    "    batch_size=batch_size,\n",
    "    shuffle=False,\n",
    "    random_seed=1234,\n",
    "    dataset_builder=None,\n",
    "    # callbacks=callback_dict,  # temporarily commented out; restore once the callback is finished @caibei\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ae6352a-38da-4d13-9357-22d5835fa2a7",
   "metadata": {},
   "source": [
    "## Summary\n",
    "This tutorial demonstrated how to use FeatureInferenceAttack in SecretFlow through a feature inference task on the UCI Sensorless Drive Diagnosis dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e0b18f1-58b6-4d0f-be81-bc3da09e6a13",
   "metadata": {},
   "source": [
    "You can:\n",
    "\n",
    "1. Download and split the dataset to prepare the data for training and the attack\n",
    "2. Define the split model structures and the SL Model\n",
    "3. Define the attacker_builder, including the data_builder and FeatureInferenceAttacker the attack needs\n",
    "4. Call the SL Model to run training and the attack\n",
    "\n",
    "Feel free to try this on your own dataset; if you have any questions, please open a discussion on GitHub."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.17"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
