{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MNIST Handwritten-Digit Binary Classification Based on the DQAS Algorithm  \n",
    "Author: 杨建飞, 2024.11.26\n",
    "## Open-Source Internship Task  \n",
    "Different quantum-circuit designs strongly affect the performance of variational quantum algorithms, quantum neural networks, and related applications. Inspired by classical neural-network design, adaptively selecting the quantum circuit promises to improve both the efficiency and the accuracy of quantum algorithms. This task builds a quantum-circuit architecture-search framework that adaptively completes the circuit design and improves the result accuracy: an adaptive framework that automatically picks the optimal combination from a given set of quantum gates to realize the target quantum algorithm.  \n",
    "The implementation must be based on MindQuantum==0.9 and reach a higher MNIST recognition accuracy than the reference https://doi.org/10.1016/j.neunet.2023.09.040\n",
    "\n",
    "## The DQAS Algorithm  \n",
    "[DQAS [1]](https://arxiv.org/abs/2010.08561) is a differentiable quantum architecture search algorithm. It splits the search for an ansatz into two objectives, structure search and parameter search, and optimizes both simultaneously, making it a joint two-level optimization. Two components must be predefined: the **operator pool** and the **shared parameter pool**. The operator pool stores the predefined quantum gates; the shared parameter pool stores the gate parameters.  \n",
    "### Operator Pool\n",
    "The operator pool holds predefined circuit fragments, called operators. Note that an operator may or may not be parameterized. The predefined operator pool of this project contains:\n",
    "1. the Rx(θ1)Ry(θ2)Rz(θ3) gate sequence\n",
    "2. the CNOT gate  \n",
    "\n",
    "Note that the same gate acting on different qubits counts as a different operator, so designing the operator pool becomes difficult as the number of qubits grows. This project therefore adopts a more scalable pool design called [Micro Search](https://proceedings.mlr.press/v202/wu23v/wu23v.pdf): only a single sub-circuit is designed, and the full circuit is assembled by repeating its structure (only the structure is repeated; the variational parameters differ). A schematic:  \n",
    "![Micro Search schematic](./asset/MicroSerach.png)\n",
    "\n",
    "### Shared Parameter Pool\n",
    "The shared parameter pool stores the gate parameters, and these parameters are shared: every generated ansatz takes its variational parameters directly from the corresponding positions of the pool (these are of course not the optimal parameters), binds them, and only then is the loss measured. The gradients produced by the loss are then used to update the corresponding entries of the pool.  \n",
    "Although a shared parameter pool does not fully follow the VQA convention of searching for optimal variational parameters per ansatz, this design greatly reduces running time and complexity. As training proceeds the pool keeps being updated and the candidate structures stabilize, so the procedure approaches a VQA with a fixed ansatz structure. \n",
    "\n"
   ]
  },
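  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parameter-sharing scheme above can be sketched in plain NumPy (an illustrative toy, not the project's actual implementation; names such as `shared_pool` and `bind_sampled_ansatz` are hypothetical): each sampled structure reads its variational parameters from a fixed position in the shared pool.\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "num_layer, pool_size, n_params = 4, 4, 3\n",
    "# one parameter vector per (layer, operator) slot of the shared pool\n",
    "shared_pool = rng.normal(0.0, 0.03, size=(num_layer, pool_size, n_params))\n",
    "\n",
    "def bind_sampled_ansatz(structure):\n",
    "    # structure[l] is the operator index sampled for layer l;\n",
    "    # the layer's parameters come straight from the pool slot\n",
    "    return [shared_pool[l, op] for l, op in enumerate(structure)]\n",
    "\n",
    "params = bind_sampled_ansatz([0, 1, 0, 2])\n",
    "assert len(params) == num_layer and params[0].shape == (n_params,)\n",
    "```"
   ]
  },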
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Preprocessing  \n",
    "The \"3\" and \"6\" images from the MNIST handwritten-digit dataset are reduced to 8 dimensions with PCA; the reduced features are then mapped to the interval [0, π] and normalized.  \n",
    "For the data-encoding part of the quantum circuit, these PCA features are loaded via angle encoding on 8 qubits.\n",
    "One further point deserves attention: MNIST contains 11446 images of \"3\" and \"6\" in total, but DQAS samples only 10% of them; the later fine-tuning stage runs on the full dataset."
   ]
  },
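  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The preprocessing described above can be sketched as follows (a minimal NumPy version using SVD-based PCA; the project's own `PCA_data_preprocessing` may differ in details such as the train/test split and the order of normalization):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def pca_angle_features(X, n_components=8):\n",
    "    # center the data and project it onto the top principal components\n",
    "    Xc = X - X.mean(axis=0)\n",
    "    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)\n",
    "    Z = Xc @ Vt[:n_components].T\n",
    "    # rescale each feature to [0, pi] so it can serve as a rotation angle\n",
    "    zmin, zmax = Z.min(axis=0), Z.max(axis=0)\n",
    "    return (Z - zmin) / (zmax - zmin) * np.pi\n",
    "\n",
    "X = np.random.default_rng(1).normal(size=(100, 784))\n",
    "angles = pca_angle_features(X)\n",
    "assert angles.shape == (100, 8)\n",
    "assert angles.min() >= 0.0 and angles.max() <= np.pi\n",
    "```"
   ]
  },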
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchvision import datasets\n",
    "from data_preprocessing import PCA_data_preprocessing\n",
    "mnist_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=None)\n",
    "# Load the binary-classification train/test sets after preprocessing and 10% sampling\n",
    "X_train, X_test, y_train, y_test = PCA_data_preprocessing(mnist_dataset,8)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "from mindquantum.core.parameterresolver import PRGenerator\n",
    "import numpy as np\n",
    "from mindquantum.core.gates import RX, RY, RZ, X, I\n",
    "from mindquantum.core.circuit import UN\n",
    "import mindspore as ms\n",
    "import pickle\n",
    "from mindspore import ops\n",
    "pr_pool = PRGenerator('pool')\n",
    "\n",
    "# Parameterized operator pool; the I gate pads the idle qubit to simplify later handling\n",
    "parameterized_circuit= \\\n",
    "[\n",
    " UN(RX(pr_pool.new()),maps_obj=[0])+\\\n",
    " UN(RY(pr_pool.new()),maps_obj=[0])+\\\n",
    " UN(RZ(pr_pool.new()),maps_obj=[0])+I.on(1),\n",
    " UN(RX(pr_pool.new()),maps_obj=[1])+\\\n",
    " UN(RY(pr_pool.new()),maps_obj=[1])+\\\n",
    " UN(RZ(pr_pool.new()),maps_obj=[1])+I.on(0),]\n",
    "\n",
    "# Non-parameterized operator pool\n",
    "unparameterized_circuit = \\\n",
    "[UN(X,maps_obj=[0],maps_ctrl=[1]),\n",
    " UN(X,maps_obj=[1],maps_ctrl=[0]),\n",
    " ]\n",
    "ansatz_pr = PRGenerator('ansatz')\n",
    "shape_parametized = len(parameterized_circuit)\n",
    "shape_unparameterized = len(unparameterized_circuit)\n",
    "\n",
    "# Number of structural placeholders (layers) of the micro-search sub-circuit\n",
    "num_layer = 4\n",
    "shape_nnp = (7,num_layer,shape_parametized,3) # shape of the shared parameter pool\n",
    "shape_stp = (num_layer,shape_unparameterized+shape_parametized) # initial shape of the structure parameters\n",
    "stddev = 0.03\n",
    "np.random.seed(2)\n",
    "nnp = np.random.normal(loc=0.0, scale=stddev, size=shape_nnp) # randomly initialized shared parameter pool\n",
    "stp = np.random.normal(loc=0.0, scale=stddev, size=shape_stp) # randomly initialized structure parameters\n",
    "ops_onehot = ops.OneHot(axis=-1) # one-hot encoding operator"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Operator Pool Members \n",
    "Inspect each member of the operator pool:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for each_op in parameterized_circuit:\n",
    "    display(each_op.svg())\n",
    "for each_op in unparameterized_circuit:\n",
    "    display(each_op.svg())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Running the DQAS Iterations"
   ]
  },
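  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The structure update in the loop below follows DQAS's Monte Carlo estimator: candidate structures are sampled from a softmax over the structure parameters, and the structure gradient is the score function of each sample weighted by its baseline-subtracted loss. A minimal self-contained sketch of that estimator (plain NumPy with a random stand-in loss; all names here are illustrative, not the `DQAS_tool` API):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(3)\n",
    "\n",
    "def softmax(x):\n",
    "    e = np.exp(x - x.max(axis=-1, keepdims=True))\n",
    "    return e / e.sum(axis=-1, keepdims=True)\n",
    "\n",
    "num_layer, pool_size = 4, 4\n",
    "stp = np.zeros((num_layer, pool_size))  # structure parameters\n",
    "\n",
    "def sample_structure(stp):\n",
    "    # one operator index per layer, drawn from the softmax distribution\n",
    "    p = softmax(stp)\n",
    "    return np.array([rng.choice(pool_size, p=p[l]) for l in range(num_layer)])\n",
    "\n",
    "def score_gradient(stp, structure):\n",
    "    # gradient of log p(structure): one_hot(choice) - softmax(stp), per layer\n",
    "    g = -softmax(stp)\n",
    "    g[np.arange(num_layer), structure] += 1.0\n",
    "    return g\n",
    "\n",
    "baseline, grads = 0.0, []\n",
    "for _ in range(8):  # a small batch of sampled structures\n",
    "    s = sample_structure(stp)\n",
    "    loss = rng.normal()  # stand-in for the measured circuit loss\n",
    "    grads.append((loss - baseline) * score_gradient(stp, s))\n",
    "stp = stp - 0.1 * np.mean(grads, axis=0)\n",
    "assert stp.shape == (num_layer, pool_size)\n",
    "```"
   ]
  },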
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from DQAS_tool import  sampling_from_structure,vag_nnp_micro_minipool,best_from_structure,zeroslike_grad_nnp_micro_minipool,nmf_gradient,DQAS_accuracy_custom,Mindspore_ansatz_micro_minipool,nnp_dealwith,TorchOptimizer,covert_ms2torch,LinearDecayLR\n",
    "import torch\n",
    "structure_opt = TorchOptimizer(torch.optim.Adam, learning_rate=0.1)\n",
    "initial_lr = 0.06\n",
    "final_lr = 0.5\n",
    "decay_steps = 75\n",
    "network_opt =   TorchOptimizer(\n",
    "    torch.optim.Adam,\n",
    "    learning_rate=initial_lr,\n",
    "    lr_scheduler=lambda opt: LinearDecayLR(opt, initial_lr, final_lr, decay_steps)\n",
    ")\n",
    "\n",
    "\n",
    "verbose = False\n",
    "# Hyperparameters\n",
    "epochs = 100\n",
    "batch_size = 100\n",
    "avcost1 = 0\n",
    "ops_onehot = ops.OneHot(axis=-1)\n",
    "batch_loss_history=[] # average batch loss per epoch\n",
    "structure_distribution_history=[] # structure parameters per epoch\n",
    "ansatz_params_history=[] # network (ansatz) parameters per epoch\n",
    "best_candidates_history=[] # best candidate structure per epoch\n",
    "acc_history = [] # accuracy per epoch\n",
    "\n",
    "for epoch in range(epochs):  # outer loop: update the structure parameters\n",
    "    avcost2 = avcost1\n",
    "    costl = []\n",
    "    tmp = np.stack([sampling_from_structure(stp,num_layer,shape_parametized) for _ in range(batch_size)])\n",
    "    batch_structure = ops_onehot(ms.Tensor(tmp),shape_parametized+shape_unparameterized,ms.Tensor(1),ms.Tensor(0))\n",
    "    loss_value = []\n",
    "    grad_nnps = []\n",
    "    grad_stps = []\n",
    "    \n",
    "    for i in batch_structure:\n",
    "        infd, grad_nnp = vag_nnp_micro_minipool(Structure_params=i,\n",
    "                                                Ansatz_params=nnp,\n",
    "                                                paramerterized_pool=parameterized_circuit,  unparamerterized_pool=unparameterized_circuit,\n",
    "                                                num_layer=num_layer,n_qbits=8)(ms.Tensor(X_train),ms.Tensor(y_train))\n",
    "        \n",
    "        grad_nnp_zeroslike = zeroslike_grad_nnp_micro_minipool(batch_sturcture=i,grad_nnp=grad_nnp[0],shape_parametized=shape_parametized,ansatz_parameters=nnp)\n",
    "        gs = nmf_gradient(structures=stp,oh=i,num_layer=num_layer,size_pool=stp.shape[1])\n",
    "        #print(infd,grad_nnp)\n",
    "        loss_value.append(infd)\n",
    "        grad_nnps.append(ms.Tensor(grad_nnp_zeroslike,dtype=ms.float64))\n",
    "        grad_stps.append(gs)\n",
    "\n",
    "      \n",
    "    infd = ops.stack(loss_value)\n",
    "    gnnp = ops.addn(grad_nnps)\n",
    "    gstp = [(infd[i] - avcost2) * grad_stps[i] for i in range(infd.shape[0])]\n",
    "    gstp_averge = ops.addn(gstp) / infd.shape[0]\n",
    "    avcost1 = sum(infd) / infd.shape[0]\n",
    "     \n",
    "    nnp = network_opt.update(covert_ms2torch(gnnp), torch.from_numpy(nnp))\n",
    "    stp = structure_opt.update(covert_ms2torch(gstp_averge), torch.from_numpy(stp)) \n",
    "    \n",
    "\n",
    "    batch_loss_history.append(avcost1)\n",
    "    structure_distribution_history.append(stp)\n",
    "    ansatz_params_history.append(nnp)\n",
    "    cand_preset = best_from_structure(stp)\n",
    "    best_candidates_history.append(cand_preset.asnumpy())\n",
    "    \n",
    "\n",
    "    if epoch % 1 == 0 or epoch == epochs - 1:\n",
    "        print(\"----------epoch %s-----------\" % epoch)\n",
    "        print(\n",
    "            \"average batch loss: \",\n",
    "            avcost1,\n",
    "        )\n",
    "    \n",
    "        if verbose:\n",
    "            print(\n",
    "                \"structure parameter: \\n\",\n",
    "                stp,\n",
    "                \"\\n network parameter: \\n\",\n",
    "                nnp,\n",
    "            )\n",
    "        \n",
    "        print(\"best candidate structure:\",cand_preset)\n",
    "        stp_for_test = ops_onehot(ms.Tensor(cand_preset),shape_parametized+shape_unparameterized,ms.Tensor(1),ms.Tensor(0))\n",
    "\n",
    "        \n",
    "        if cand_preset.min() <shape_parametized:\n",
    "            ansatz_parameters = nnp_dealwith(Structure_params=stp_for_test,Network_params=nnp)\n",
    "            test_ansatz = Mindspore_ansatz_micro_minipool(Structure_p=stp_for_test,\n",
    "                                            parameterized_pool=parameterized_circuit,unparameterized_pool=unparameterized_circuit,\n",
    "                                            num_layer=num_layer,\n",
    "                                            n_qbits=8)\n",
    "            acc = DQAS_accuracy_custom(ansatz=test_ansatz,Network_params=ansatz_parameters,X=X_train,y=y_train,n_qbits=8)\n",
    "            acc_history.append(acc)\n",
    "            print(f'Binary-classification accuracy on the 10% training subset: Acc = {acc*100}%')\n",
    "        \n",
    "        # Save all training histories at the end of every epoch\n",
    "        with open('training_history-minipool-k4.pkl', 'wb') as f:\n",
    "            pickle.dump({\n",
    "                'batch_loss_history': batch_loss_history,\n",
    "                'structure_distribution_history': structure_distribution_history,\n",
    "                'ansatz_params_history': ansatz_params_history,\n",
    "                'best_candidates_history': best_candidates_history,\n",
    "                'acc_history': acc_history\n",
    "            }, f)\n",
    "        \n",
    "        "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
