{
 "nbformat": 4,
 "nbformat_minor": 0,
 "metadata": {
  "colab": {
   "provenance": [],
   "gpuType": "T4"
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3"
  },
  "language_info": {
   "name": "python"
  },
  "accelerator": "GPU"
 },
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# TorchQuantum Qubit Rotation Tutorial\n",
    "\n",
    "> Note: This tutorial was adapted from Pennylane's [Basic tutorial: qubit rotation](https://pennylane.ai/qml/demos/tutorial_qubit_rotation) by Josh Izaac.\n",
    "\n",
    "To see how TorchQuantum allows the easy construction and optimization of quantum functions, let's consider the simple case of qubit rotation.\n",
    "\n",
    "The task at hand is to optimize two rotation gates in order to flip a single qubit from state |0⟩ to state |1⟩.\n",
    "\n"
   ],
   "metadata": {
    "id": "Y6HrDR9HLIgG"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "## The quantum circuit"
   ],
   "metadata": {
    "id": "Y7jxsrq-TcqC"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "In the qubit rotation example, we wish to implement the following quantum circuit:\n",
    "\n",
    "*(Circuit diagram: the qubit starts in $|0\\rangle$, passes through $R_x(\\phi_1)$ then $R_y(\\phi_2)$, and is measured in the Pauli-Z basis.)*\n",
    "\n",
    "Breaking this down step-by-step, we first start with a qubit in the ground state $|0⟩ = [1\\ 0]^T$, and rotate it around the x-axis by applying the gate\n",
    "\n",
    "$\\begin{split}R_x(\\phi_1) = e^{-i \\phi_1 \\sigma_x /2} =\n",
    "\\begin{bmatrix}\n",
    "\\cos \\frac{\\phi_1}{2} & -i \\sin \\frac{\\phi_1}{2}\n",
    "\\\\\n",
    "-i \\sin \\frac{\\phi_1}{2} & \\cos \\frac{\\phi_1}{2}\n",
    "\\end{bmatrix},\n",
    "\\end{split}$\n",
    "\n",
    "and then around the y-axis via the gate\n",
    "\n",
    "$\\begin{split}R_y(\\phi_2) = e^{-i \\phi_2 \\sigma_y/2} =\n",
    "\\begin{bmatrix} \\cos \\frac{\\phi_2}{2} & - \\sin \\frac{\\phi_2}{2}\n",
    "\\\\\n",
    "\\sin \\frac{\\phi_2}{2} & \\cos \\frac{\\phi_2}{2}\n",
    "\\end{bmatrix}.\\end{split}$\n",
    "\n",
    "After these operations the qubit is now in the state\n",
    "\n",
    "$| \\psi \\rangle = R_y(\\phi_2) R_x(\\phi_1) | 0 \\rangle.$\n",
    "\n",
    "Finally, we measure the expectation value $\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle$ of the Pauli-Z operator\n",
    "\n",
    "$\\begin{split}\\sigma_z =\n",
    "\\begin{bmatrix} 1 & 0\n",
    "\\\\\n",
    "0 & -1\n",
    "\\end{bmatrix}.\\end{split}$\n",
    "\n",
    "Using the above to calculate the exact expectation value, we find that\n",
    "\n",
    "$\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle\n",
    "      = \\langle 0 \\mid R_x(\\phi_1)^\\dagger R_y(\\phi_2)^\\dagger \\sigma_z R_y(\\phi_2) R_x(\\phi_1) \\mid 0 \\rangle\n",
    "      = \\cos(\\phi_1)\\cos(\\phi_2).$\n",
    "\n",
    "Depending on the circuit parameters $\\phi_1$ and $\\phi_2$, the output expectation value lies between $1$ (if $|\\psi\\rangle = |0\\rangle$) and $-1$ (if $|\\psi\\rangle = |1\\rangle$).\n",
    "\n",
    "Let's see how we can easily implement and optimize this circuit using TorchQuantum.\n",
    "\n"
   ],
   "metadata": {
    "id": "oFuPFpXhTFAR"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Importing TorchQuantum"
   ],
   "metadata": {
    "id": "5sge9dfJTer6"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "The first thing we need to do is install and import TorchQuantum. To utilize all of TorchQuantum's features, install it from source."
   ],
   "metadata": {
    "id": "4qF2oH1MTmHb"
   }
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "id": "omF7GkuHKaPp",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "2c200ab7-f939-4193-d872-01c0f98b3ee6"
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Cloning into 'torchquantum'...\n",
      "remote: Enumerating objects: 13551, done.\u001b[K\n",
      "remote: Counting objects: 100% (1822/1822), done.\u001b[K\n",
      "remote: Compressing objects: 100% (758/758), done.\u001b[K\n",
      "remote: Total 13551 (delta 1085), reused 1640 (delta 980), pack-reused 11729\u001b[K\n",
      "Receiving objects: 100% (13551/13551), 104.07 MiB | 21.17 MiB/s, done.\n",
      "Resolving deltas: 100% (7442/7442), done.\n",
      "Obtaining file:///content/torchquantum\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: numpy>=1.19.2 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (1.22.4)\n",
      "Requirement already satisfied: torchvision>=0.9.0.dev20210130 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (0.15.2+cu118)\n",
      "Requirement already satisfied: tqdm>=4.56.0 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (4.65.0)\n",
      "Requirement already satisfied: setuptools>=52.0.0 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (67.7.2)\n",
      "Requirement already satisfied: torch>=1.8.0 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (2.0.1+cu118)\n",
      "Collecting torchdiffeq>=0.2.3 (from torchquantum==0.1.7)\n",
      "  Downloading torchdiffeq-0.2.3-py3-none-any.whl (31 kB)\n",
      "Collecting torchpack>=0.3.0 (from torchquantum==0.1.7)\n",
      "  Downloading torchpack-0.3.1-py3-none-any.whl (34 kB)\n",
      "Collecting qiskit==0.38.0 (from torchquantum==0.1.7)\n",
      "  Downloading qiskit-0.38.0.tar.gz (13 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: matplotlib>=3.3.2 in /usr/local/lib/python3.10/dist-packages (from torchquantum==0.1.7) (3.7.1)\n",
      "Collecting pathos>=0.2.7 (from torchquantum==0.1.7)\n",
      "  Downloading pathos-0.3.0-py3-none-any.whl (79 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m79.8/79.8 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting pylatexenc>=2.10 (from torchquantum==0.1.7)\n",
      "  Downloading pylatexenc-2.10.tar.gz (162 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m162.6/162.6 kB\u001b[0m \u001b[31m9.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Collecting dill==0.3.4 (from torchquantum==0.1.7)\n",
      "  Downloading dill-0.3.4-py2.py3-none-any.whl (86 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.9/86.9 kB\u001b[0m \u001b[31m8.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting qiskit-terra==0.21.2 (from qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading qiskit_terra-0.21.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.7 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.7/6.7 MB\u001b[0m \u001b[31m13.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting qiskit-aer==0.11.0 (from qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading qiskit_aer-0.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.2 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m19.2/19.2 MB\u001b[0m \u001b[31m64.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting qiskit-ibmq-provider==0.19.2 (from qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading qiskit_ibmq_provider-0.19.2-py3-none-any.whl (240 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m240.4/240.4 kB\u001b[0m \u001b[31m23.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: scipy>=1.0 in /usr/local/lib/python3.10/dist-packages (from qiskit-aer==0.11.0->qiskit==0.38.0->torchquantum==0.1.7) (1.10.1)\n",
      "Requirement already satisfied: requests>=2.19 in /usr/local/lib/python3.10/dist-packages (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (2.27.1)\n",
      "Collecting requests-ntlm>=1.1.0 (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading requests_ntlm-1.2.0-py3-none-any.whl (6.0 kB)\n",
      "Requirement already satisfied: urllib3>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (1.26.16)\n",
      "Requirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.10/dist-packages (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (2.8.2)\n",
      "Requirement already satisfied: websocket-client>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (1.6.1)\n",
      "Collecting websockets>=10.0 (from qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading websockets-11.0.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (129 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m129.9/129.9 kB\u001b[0m \u001b[31m13.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting retworkx>=0.11.0 (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading retworkx-0.13.0-py3-none-any.whl (10 kB)\n",
      "Collecting ply>=3.10 (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading ply-3.11-py2.py3-none-any.whl (49 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.6/49.6 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: psutil>=5 in /usr/local/lib/python3.10/dist-packages (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7) (5.9.5)\n",
      "Requirement already satisfied: sympy>=1.3 in /usr/local/lib/python3.10/dist-packages (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7) (1.11.1)\n",
      "Collecting stevedore>=3.0.0 (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading stevedore-5.1.0-py3-none-any.whl (49 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.6/49.6 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting tweedledum<2.0,>=1.1 (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading tweedledum-1.1.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (929 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m929.7/929.7 kB\u001b[0m \u001b[31m55.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting symengine>=0.9 (from qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading symengine-0.10.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (37.4 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m37.4/37.4 MB\u001b[0m \u001b[31m14.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (1.1.0)\n",
      "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (0.11.0)\n",
      "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (4.40.0)\n",
      "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (1.4.4)\n",
      "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (23.1)\n",
      "Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (8.4.0)\n",
      "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.3.2->torchquantum==0.1.7) (3.1.0)\n",
      "Collecting ppft>=1.7.6.6 (from pathos>=0.2.7->torchquantum==0.1.7)\n",
      "  Downloading ppft-1.7.6.6-py3-none-any.whl (52 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m52.8/52.8 kB\u001b[0m \u001b[31m5.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hINFO: pip is looking at multiple versions of pathos to determine which version is compatible with other requirements. This could take a while.\n",
      "Collecting pathos>=0.2.7 (from torchquantum==0.1.7)\n",
      "  Downloading pathos-0.2.9-py3-none-any.whl (76 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.9/76.9 kB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Downloading pathos-0.2.8-py2.py3-none-any.whl (81 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m81.7/81.7 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting multiprocess>=0.70.12 (from pathos>=0.2.7->torchquantum==0.1.7)\n",
      "  Downloading multiprocess-0.70.14-py310-none-any.whl (134 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.3/134.3 kB\u001b[0m \u001b[31m14.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting pox>=0.3.0 (from pathos>=0.2.7->torchquantum==0.1.7)\n",
      "  Downloading pox-0.3.2-py3-none-any.whl (29 kB)\n",
      "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.8.0->torchquantum==0.1.7) (3.12.2)\n",
      "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.8.0->torchquantum==0.1.7) (4.7.1)\n",
      "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.8.0->torchquantum==0.1.7) (3.1)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.8.0->torchquantum==0.1.7) (3.1.2)\n",
      "Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.8.0->torchquantum==0.1.7) (2.0.0)\n",
      "Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.8.0->torchquantum==0.1.7) (3.25.2)\n",
      "Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.8.0->torchquantum==0.1.7) (16.0.6)\n",
      "Requirement already satisfied: h5py in /usr/local/lib/python3.10/dist-packages (from torchpack>=0.3.0->torchquantum==0.1.7) (3.8.0)\n",
      "Collecting loguru (from torchpack>=0.3.0->torchquantum==0.1.7)\n",
      "  Downloading loguru-0.7.0-py3-none-any.whl (59 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.0/60.0 kB\u001b[0m \u001b[31m7.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting multimethod (from torchpack>=0.3.0->torchquantum==0.1.7)\n",
      "  Downloading multimethod-1.9.1-py3-none-any.whl (10 kB)\n",
      "Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from torchpack>=0.3.0->torchquantum==0.1.7) (6.0)\n",
      "Requirement already satisfied: tensorboard in /usr/local/lib/python3.10/dist-packages (from torchpack>=0.3.0->torchquantum==0.1.7) (2.12.3)\n",
      "Collecting tensorpack (from torchpack>=0.3.0->torchquantum==0.1.7)\n",
      "  Downloading tensorpack-0.11-py2.py3-none-any.whl (296 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m296.3/296.3 kB\u001b[0m \u001b[31m27.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: toml in /usr/local/lib/python3.10/dist-packages (from torchpack>=0.3.0->torchquantum==0.1.7) (0.10.2)\n",
      "INFO: pip is looking at multiple versions of multiprocess to determine which version is compatible with other requirements. This could take a while.\n",
      "Collecting multiprocess>=0.70.12 (from pathos>=0.2.7->torchquantum==0.1.7)\n",
      "  Downloading multiprocess-0.70.13-py310-none-any.whl (133 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m133.1/133.1 kB\u001b[0m \u001b[31m10.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Downloading multiprocess-0.70.12.2-py39-none-any.whl (128 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m128.7/128.7 kB\u001b[0m \u001b[31m15.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.0->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (1.16.0)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (2023.5.7)\n",
      "Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (2.0.12)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (3.4)\n",
      "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy>=1.3->qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7) (1.3.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.8.0->torchquantum==0.1.7) (2.1.3)\n",
      "Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (1.4.0)\n",
      "Requirement already satisfied: grpcio>=1.48.2 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (1.56.0)\n",
      "Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (2.17.3)\n",
      "Requirement already satisfied: google-auth-oauthlib<1.1,>=0.5 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (1.0.0)\n",
      "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (3.4.3)\n",
      "Requirement already satisfied: protobuf>=3.19.6 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (3.20.3)\n",
      "Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (0.7.1)\n",
      "Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (2.3.6)\n",
      "Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.10/dist-packages (from tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (0.40.0)\n",
      "Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.10/dist-packages (from tensorpack->torchpack>=0.3.0->torchquantum==0.1.7) (2.3.0)\n",
      "Requirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.10/dist-packages (from tensorpack->torchpack>=0.3.0->torchquantum==0.1.7) (0.8.10)\n",
      "Requirement already satisfied: msgpack>=0.5.2 in /usr/local/lib/python3.10/dist-packages (from tensorpack->torchpack>=0.3.0->torchquantum==0.1.7) (1.0.5)\n",
      "Collecting msgpack-numpy>=0.4.4.2 (from tensorpack->torchpack>=0.3.0->torchquantum==0.1.7)\n",
      "  Downloading msgpack_numpy-0.4.8-py2.py3-none-any.whl (6.9 kB)\n",
      "Requirement already satisfied: pyzmq>=16 in /usr/local/lib/python3.10/dist-packages (from tensorpack->torchpack>=0.3.0->torchquantum==0.1.7) (23.2.1)\n",
      "Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (5.3.1)\n",
      "Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (0.3.0)\n",
      "Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (4.9)\n",
      "Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from google-auth-oauthlib<1.1,>=0.5->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (1.3.1)\n",
      "Collecting cryptography>=1.3 (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading cryptography-41.0.2-cp37-abi3-manylinux_2_28_x86_64.whl (4.3 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.3/4.3 MB\u001b[0m \u001b[31m79.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting pyspnego>=0.1.6 (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading pyspnego-0.9.1-py3-none-any.whl (132 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m132.9/132.9 kB\u001b[0m \u001b[31m12.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting rustworkx==0.13.0 (from retworkx>=0.11.0->qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading rustworkx-0.13.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.9 MB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.9/1.9 MB\u001b[0m \u001b[31m62.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hCollecting pbr!=2.1.0,>=2.0.0 (from stevedore>=3.0.0->qiskit-terra==0.21.2->qiskit==0.38.0->torchquantum==0.1.7)\n",
      "  Downloading pbr-5.11.1-py2.py3-none-any.whl (112 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m112.7/112.7 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (1.15.1)\n",
      "Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (0.5.0)\n",
      "Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<1.1,>=0.5->tensorboard->torchpack>=0.3.0->torchquantum==0.1.7) (3.2.2)\n",
      "Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.19.2->qiskit==0.38.0->torchquantum==0.1.7) (2.21)\n",
      "Building wheels for collected packages: qiskit, pylatexenc\n",
      "  Building wheel for qiskit (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Created wheel for qiskit: filename=qiskit-0.38.0-py3-none-any.whl size=12128 sha256=7a54933fa9c2e1b1caffdc6129aa17723a1f8a19655b68eb148d3b916a542664\n",
      "  Stored in directory: /root/.cache/pip/wheels/9c/b0/59/d6281e20610c76a5f88c9b931c6b338410f70b4ba6561453bc\n",
      "  Building wheel for pylatexenc (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Created wheel for pylatexenc: filename=pylatexenc-2.10-py3-none-any.whl size=136820 sha256=087f5465344ad90f93c062a3e9d3224bf3afdb393350b74383207ecbe6a0509b\n",
      "  Stored in directory: /root/.cache/pip/wheels/d3/31/8b/e09b0386afd80cfc556c00408c9aeea5c35c4d484a9c762fd5\n",
      "Successfully built qiskit pylatexenc\n",
      "Installing collected packages: pylatexenc, ply, websockets, tweedledum, symengine, rustworkx, ppft, pox, pbr, multimethod, msgpack-numpy, loguru, dill, tensorpack, stevedore, retworkx, multiprocess, cryptography, qiskit-terra, pyspnego, pathos, requests-ntlm, qiskit-aer, qiskit-ibmq-provider, qiskit, torchpack, torchdiffeq, torchquantum\n",
      "  Running setup.py develop for torchquantum\n",
      "Successfully installed cryptography-41.0.2 dill-0.3.4 loguru-0.7.0 msgpack-numpy-0.4.8 multimethod-1.9.1 multiprocess-0.70.12.2 pathos-0.2.8 pbr-5.11.1 ply-3.11 pox-0.3.2 ppft-1.7.6.6 pylatexenc-2.10 pyspnego-0.9.1 qiskit-0.38.0 qiskit-aer-0.11.0 qiskit-ibmq-provider-0.19.2 qiskit-terra-0.21.2 requests-ntlm-1.2.0 retworkx-0.13.0 rustworkx-0.13.0 stevedore-5.1.0 symengine-0.10.0 tensorpack-0.11 torchdiffeq-0.2.3 torchpack-0.3.1 torchquantum-0.1.7 tweedledum-1.1.1 websockets-11.0.3\n"
     ]
    }
   ],
   "source": [
    "!git clone https://github.com/mit-han-lab/torchquantum.git\n",
    "!cd torchquantum && pip install --editable ."
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "> **Note: After installing TorchQuantum on Colab, you must restart your runtime before continuing!**\n",
    "\n",
    "After installing from source (and restarting if using Colab!), you can import TorchQuantum."
   ],
   "metadata": {
    "id": "Ckw9S9C0TzuH"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "import torchquantum as tq"
   ],
   "metadata": {
    "id": "vhmIuM9Wc70Z"
   },
   "execution_count": 1,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Creating a device"
   ],
   "metadata": {
    "id": "PxW4zls2Y3QM"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Before we can construct our quantum circuit, we need to initialize a device.\n",
    "\n",
    "> **Definition**\n",
    ">\n",
    "> Any computational object that can apply quantum operations and return a measurement value is called a quantum **device**.\n",
    "\n",
    "> *Devices are loaded in TorchQuantum via the class [QuantumDevice()](https://github.com/mit-han-lab/torchquantum/blob/main/torchquantum/devices.py#L13).*\n"
   ],
   "metadata": {
    "id": "Y08Q6dMKY6HC"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "For this tutorial, we are using the qubit model, so let's initialize the 'default' device provided by TorchQuantum."
   ],
   "metadata": {
    "id": "0bgRmzQLeOtt"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "qdev = tq.QuantumDevice(\n",
    "    n_wires=1, device_name=\"default\", bsz=1, device=\"cuda\", record_op=True\n",
    ")"
   ],
   "metadata": {
    "id": "NUrCxUQvc_3i"
   },
   "execution_count": 4,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "For all devices, [QuantumDevice()](https://github.com/mit-han-lab/torchquantum/blob/main/torchquantum/devices.py#L13) accepts the following arguments:\n",
    "\n",
    "* n_wires: number of qubits to initialize the device with\n",
    "* device_name: name of the quantum device to be loaded\n",
    "* bsz: batch size of the quantum state\n",
    "* device: which classical computing device to use, 'cpu' or 'cuda' (similar to the device option in PyTorch)\n",
    "* record_op: whether to record the operations applied to the device, so that they can later be used to construct a static computation graph\n",
    "\n",
    "Here, as we only require a single qubit for this example, we set `n_wires=1`."
   ],
   "metadata": {
    "id": "uJOZRR--dQ0n"
   }
  },
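  {
   "cell_type": "markdown",
   "source": [
    "As a quick sanity check, we can inspect the freshly initialized statevector, which should be $|0\\rangle = [1\\ 0]^T$ for each element of the batch. (This is a sketch; the `get_states_1d()` helper is assumed to be available, as in recent TorchQuantum versions.)"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# A freshly initialized device should hold the ground state |0>.\n",
    "# get_states_1d() (assumed helper) returns the statevector(s) as a\n",
    "# (bsz, 2**n_wires) tensor, so here a single two-amplitude state.\n",
    "print(qdev.get_states_1d())"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },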
  {
   "cell_type": "markdown",
   "source": [
    "## Constructing the Circuit"
   ],
   "metadata": {
    "id": "n2bS-rw1em0a"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Now that we have initialized our device, we can begin to construct the circuit. In TorchQuantum, there are multiple ways to construct a circuit, and we can explore a few of them."
   ],
   "metadata": {
    "id": "9W-rBf2CfCQd"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# specify parameters\n",
    "params = [0.54, 0.12]\n",
    "\n",
    "# create circuit\n",
    "qdev.rx(params=params[0], wires=0)\n",
    "qdev.ry(params=params[1], wires=0)"
   ],
   "metadata": {
    "id": "qcmWA-o4hBqa"
   },
   "execution_count": 5,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "This method calls the gates directly from the QuantumDevice. For each rotation, we specify the wire it acts on (zero-indexed) and a parameter for the rotation angle. The rotation gates also accept several other arguments:\n",
    "\n",
    "* wires: which qubits the gate is applied to\n",
    "* theta: the amount of rotation\n",
    "* n_wires: the number of qubits the gate is applied to\n",
    "* static: whether to use static mode computation\n",
    "* parent_graph: the parent QuantumGraph of the current operation\n",
    "* inverse: whether to apply the inverse of the gate\n",
    "* comp_method: whether to use the 'bmm' or 'einsum' method to perform the matrix-vector multiplication"
   ],
   "metadata": {
    "id": "RQhCOnNAhm7q"
   }
  },
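  {
   "cell_type": "markdown",
   "source": [
    "For instance, applying a rotation and then its inverse should return the qubit to its initial state. (A sketch: we assume `inverse` and `comp_method` are forwarded as keyword arguments through the device's gate methods, and we use a separate device so the tutorial circuit on `qdev` is left untouched.)"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# Build a throwaway device to demonstrate the extra gate arguments\n",
    "# (assumes these kwargs are forwarded to the functional gate call).\n",
    "demo_dev = tq.QuantumDevice(n_wires=1)\n",
    "\n",
    "# Apply R_x(0.54) with the 'einsum' method, then undo it with inverse=True.\n",
    "demo_dev.rx(params=0.54, wires=0, comp_method=\"einsum\")\n",
    "demo_dev.rx(params=0.54, wires=0, inverse=True, comp_method=\"einsum\")\n",
    "\n",
    "# The two rotations cancel, so the device should be back in state |0>.\n",
    "print(demo_dev)"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },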
  {
   "cell_type": "markdown",
   "source": [
    "To compute the expectation value described above, we can use two different functions from TorchQuantum's measurement module."
   ],
   "metadata": {
    "id": "zY5lSe3Nl-78"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "from torchquantum.measurement import expval_joint_analytical, expval_joint_sampling"
   ],
   "metadata": {
    "id": "3IHmc_ILirVI"
   },
   "execution_count": 6,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "* `expval_joint_analytical` computes the expectation value of a joint observable analytically, assuming the statevector is available. This can only be run on a classical simulator, not on real quantum hardware.\n",
    "\n",
    "* `expval_joint_sampling` computes the expectation value of a joint observable by sampling measurement bitstrings. This can be run on both a classical simulator and real quantum hardware. Since it samples measurements, it requires a parameter for the number of shots, `n_shots`.\n",
    "\n"
   ],
   "metadata": {
    "id": "h_05PJxAjIMk"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "exp_a = expval_joint_analytical(qdev, \"Z\")\n",
    "exp_s = expval_joint_sampling(qdev, \"Z\", n_shots=1024)\n",
    "\n",
    "print(exp_a, exp_s)"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "3TIwrhn1kD-a",
    "outputId": "0af08a9b-5c1c-475b-9845-3aa2331ed59e"
   },
   "execution_count": 7,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([0.8515]) tensor([0.8184])\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "The two values are approximately equal, and if we increase the number of shots for joint sampling, the sampled expectation value will approach the analytical result."
   ],
   "metadata": {
    "id": "9rUMiTshkuYk"
   }
  },
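  {
   "cell_type": "markdown",
   "source": [
    "We can check this convergence empirically by sweeping `n_shots` over the circuit already recorded on `qdev` (a quick sketch; the exact sampled values will vary from run to run):"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# With more shots, the sampled estimate should concentrate around\n",
    "# the analytical value cos(0.54) * cos(0.12) = 0.8515...\n",
    "for n_shots in [128, 1024, 8192]:\n",
    "    print(n_shots, expval_joint_sampling(qdev, \"Z\", n_shots=n_shots))"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },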
  {
   "cell_type": "markdown",
   "source": [
    "## Calculating quantum gradients"
   ],
   "metadata": {
    "id": "WyHxuz4_l0lB"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "From the expected values output, notice that the analytical expected value has an automatically-calculated gradient which can be used when constructing quantum machine learning models. This is because TorchQuantum automatically calculates the gradients. Let's find the gradient of each individual gate.\n",
    "\n",
    "To do so, we can create the circuit slightly differently, saving each operation as a variable then adding it to the circuit. We can then once again get the expected value with `expval_joint_analytical`."
   ],
   "metadata": {
    "id": "GNlFHcRDnqVl"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "qdev = tq.QuantumDevice(n_wires=1)\n",
    "\n",
    "op1 = tq.RX(has_params=True, trainable=True, init_params=0.54)\n",
    "op1(qdev, wires=0)\n",
    "\n",
    "op2 = tq.RY(has_params=True, trainable=True, init_params=0.12)\n",
    "op2(qdev, wires=0)\n",
    "\n",
    "\n",
    "expval = expval_joint_analytical(qdev, \"Z\")"
   ],
   "metadata": {
    "id": "m_n2ROPNoFzn"
   },
   "execution_count": 8,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "We can then call `.backward()` on the expected value, just like in PyTorch. Afterwards, we can see the gradient of each operation under the `params` option."
   ],
   "metadata": {
    "id": "znstxaD3pFdK"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "expval[0].backward()\n",
    "\n",
    "# calculate the gradients for each operation!\n",
    "print(op1.params.grad, op2.params.grad)"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "d6ehHkuSo5Oq",
    "outputId": "d3610b9a-48b2-4797-c3c9-818db8ce0687"
   },
   "execution_count": 9,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([[-0.5104]]) tensor([[-0.1027]])\n"
     ]
    }
   ]
  },
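  {
   "cell_type": "markdown",
   "source": [
    "As a sanity check, for this circuit the analytical expectation works out to $⟨ψ∣σ_z∣ψ⟩ = \\cos \\phi_1 \\cos \\phi_2$, so the exact gradients at $\\phi_1 = 0.54$, $\\phi_2 = 0.12$ are\n",
    "\n",
    "$\\frac{\\partial}{\\partial \\phi_1} \\cos \\phi_1 \\cos \\phi_2 = -\\sin \\phi_1 \\cos \\phi_2 \\approx -0.5104, \\qquad \\frac{\\partial}{\\partial \\phi_2} \\cos \\phi_1 \\cos \\phi_2 = -\\cos \\phi_1 \\sin \\phi_2 \\approx -0.1027,$\n",
    "\n",
    "matching the autograd values above."
   ],
   "metadata": {}
  },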
  {
   "cell_type": "markdown",
   "source": [
    "## Optimization"
   ],
   "metadata": {
    "id": "R8PqqWa5pbzU"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Next, let's make use of PyTorch's optimizers to optimize the two circuit parameters $\\phi_1$ and $\\phi_2$ such that the qubit, originally in state |0⟩, is rotated to be in state |1⟩. This is equivalent to measuring a Pauli-Z expectation value of -1, since the state |1⟩ is an eigenvector of the Pauli-Z matrix with eigenvalue λ=−1."
   ],
   "metadata": {
    "id": "q4I7DC2Uphzs"
   }
  },
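  {
   "cell_type": "markdown",
   "source": [
    "It is worth knowing the analytical expectation for this circuit before optimizing. Applying $R_y(\\phi_2) R_x(\\phi_1)$ to $|0⟩$ and computing $⟨ψ∣σ_z∣ψ⟩ = |a_0|^2 - |a_1|^2$ from the final amplitudes $a_0, a_1$ gives\n",
    "\n",
    "$⟨ψ∣σ_z∣ψ⟩ = \\left(\\cos^2 \\tfrac{\\phi_1}{2} - \\sin^2 \\tfrac{\\phi_1}{2}\\right)\\left(\\cos^2 \\tfrac{\\phi_2}{2} - \\sin^2 \\tfrac{\\phi_2}{2}\\right) = \\cos \\phi_1 \\cos \\phi_2,$\n",
    "\n",
    "which attains its minimum of $-1$ at, for example, $(\\phi_1, \\phi_2) = (0, \\pi)$."
   ],
   "metadata": {}
  },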
  {
   "cell_type": "markdown",
   "source": [
    "To construct this circuit, we can use a class similar to a PyTorch module! We can begin by importing torch."
   ],
   "metadata": {
    "id": "G3h9LjzJqf0k"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "import torch"
   ],
   "metadata": {
    "id": "X3LeotDeqoIh"
   },
   "execution_count": 38,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "We can next create the class extending the PyTorch module and add our gates in a similar fashion as the previous steps."
   ],
   "metadata": {
    "id": "LbN7Fo67qpGI"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "import torchquantum as tq\n",
    "import torchquantum.functional as tqf\n",
    "\n",
    "\n",
    "class OptimizationModel(torch.nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.rx0 = tq.RX(has_params=True, trainable=True, init_params=0.011)\n",
    "        self.ry0 = tq.RY(has_params=True, trainable=True, init_params=0.012)\n",
    "\n",
    "    def forward(self):\n",
    "        # create a quantum device to run the gates\n",
    "        qdev = tq.QuantumDevice(n_wires=1)\n",
    "\n",
    "        # add some trainable gates (need to instantiate ahead of time)\n",
    "        self.rx0(qdev, wires=0)\n",
    "        self.ry0(qdev, wires=0)\n",
    "\n",
    "        return expval_joint_analytical(qdev, \"Z\")"
   ],
   "metadata": {
    "id": "gxve5-2SpdDA"
   },
   "execution_count": 39,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "To optimize the rotation, we need to define a cost function. By minimizing the cost function, the optimizer will determine the values of the circuit parameters that produce the desired outcome.\n",
    "\n",
    "In this case, our desired outcome is a Pauli-Z expectation value of −1. Since we know that the Pauli-Z expectation is bound between [−1, 1], we can define our cost directly as the output of the circuit.\n",
    "\n",
    "Similar to PyTorch, we can create a train function to compute the gradients of the loss function and have the optimizer perform an optimization step."
   ],
   "metadata": {
    "id": "Ifi5cH_eq_zW"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "def train(model, device, optimizer):\n",
    "    targets = 0\n",
    "\n",
    "    outputs = model()\n",
    "    loss = outputs\n",
    "    optimizer.zero_grad()\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "\n",
    "    return loss.item()"
   ],
   "metadata": {
    "id": "H5xxXrWUrAO3"
   },
   "execution_count": 53,
   "outputs": []
  },
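  {
   "cell_type": "markdown",
   "source": [
    "Under plain SGD with learning rate $\\eta$, each call to `optimizer.step()` updates both parameters as\n",
    "\n",
    "$\\phi_i \\leftarrow \\phi_i - \\eta \\frac{\\partial ⟨ψ∣σ_z∣ψ⟩}{\\partial \\phi_i}, \\qquad i = 1, 2.$"
   ],
   "metadata": {}
  },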
  {
   "cell_type": "markdown",
   "source": [
    "Finally, we can run the model. We can import PyTorch's gradient descent module and use it to optimize our model."
   ],
   "metadata": {
    "id": "dn0131aKrjQ8"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "def main():\n",
    "    seed = 0\n",
    "    torch.manual_seed(seed)\n",
    "\n",
    "    use_cuda = torch.cuda.is_available()\n",
    "    device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n",
    "\n",
    "    model = OptimizationModel()\n",
    "    n_epochs = 200\n",
    "    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "\n",
    "    for epoch in range(1, n_epochs + 1):\n",
    "        # train\n",
    "        loss = train(model, device, optimizer)\n",
    "        output = (model.rx0.params[0].item(), model.ry0.params[0].item())\n",
    "        print(f\"Epoch {epoch}: {output}\")\n",
    "\n",
    "        if epoch % 10 == 0:\n",
    "            print(f\"Loss after step {epoch}: {loss}\")"
   ],
   "metadata": {
    "id": "4WJ7yL5SrjkA"
   },
   "execution_count": 54,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "Finally, we can call the main function and run the entire sequence!"
   ],
   "metadata": {
    "id": "eY5PvCqhr1ZF"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "main()"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "_hCBPtMvr4wB",
    "outputId": "2091f42b-7263-4e5d-d2f7-eab8b07bbe7f"
   },
   "execution_count": 55,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Epoch 1: (0.012099898420274258, 0.013199898414313793)\n",
      "Epoch 2: (0.013309753499925137, 0.014519752934575081)\n",
      "Epoch 3: (0.014640549197793007, 0.015971548855304718)\n",
      "Epoch 4: (0.01610436476767063, 0.017568465322256088)\n",
      "Epoch 5: (0.01771448366343975, 0.01932499371469021)\n",
      "Epoch 6: (0.019485509023070335, 0.02125706896185875)\n",
      "Epoch 7: (0.021433496847748756, 0.023382212966680527)\n",
      "Epoch 8: (0.023576095700263977, 0.02571968361735344)\n",
      "Epoch 9: (0.025932706892490387, 0.028290653601288795)\n",
      "Epoch 10: (0.028524650260806084, 0.03111839108169079)\n",
      "Loss after step 10: 0.9992638230323792\n",
      "Epoch 11: (0.031375348567962646, 0.03422846272587776)\n",
      "Epoch 12: (0.0345105305314064, 0.03764895722270012)\n",
      "Epoch 13: (0.037958454340696335, 0.041410721838474274)\n",
      "Epoch 14: (0.04175013676285744, 0.04554762691259384)\n",
      "Epoch 15: (0.04591960832476616, 0.050096847116947174)\n",
      "Epoch 16: (0.05050419643521309, 0.055099159479141235)\n",
      "Epoch 17: (0.05554480850696564, 0.06059926748275757)\n",
      "Epoch 18: (0.06108624115586281, 0.06664614379405975)\n",
      "Epoch 19: (0.06717751175165176, 0.07329340279102325)\n",
      "Epoch 20: (0.07387219369411469, 0.08059966564178467)\n",
      "Loss after step 20: 0.9950657486915588\n",
      "Epoch 21: (0.08122873306274414, 0.08862894773483276)\n",
      "Epoch 22: (0.08931083232164383, 0.09745106101036072)\n",
      "Epoch 23: (0.09818772971630096, 0.10714197158813477)\n",
      "Epoch 24: (0.10793451964855194, 0.11778417229652405)\n",
      "Epoch 25: (0.11863239109516144, 0.12946699559688568)\n",
      "Epoch 26: (0.13036876916885376, 0.14228680729866028)\n",
      "Epoch 27: (0.14323736727237701, 0.15634718537330627)\n",
      "Epoch 28: (0.15733805298805237, 0.17175881564617157)\n",
      "Epoch 29: (0.172776460647583, 0.18863925337791443)\n",
      "Epoch 30: (0.18966329097747803, 0.20711229741573334)\n",
      "Loss after step 30: 0.9676356315612793\n",
      "Epoch 31: (0.20811320841312408, 0.2273070216178894)\n",
      "Epoch 32: (0.2282431572675705, 0.2493562251329422)\n",
      "Epoch 33: (0.2501700222492218, 0.27339422702789307)\n",
      "Epoch 34: (0.2740074098110199, 0.29955384135246277)\n",
      "Epoch 35: (0.2998615801334381, 0.32796236872673035)\n",
      "Epoch 36: (0.32782599329948425, 0.35873648524284363)\n",
      "Epoch 37: (0.3579748272895813, 0.39197587966918945)\n",
      "Epoch 38: (0.3903552293777466, 0.427755743265152)\n",
      "Epoch 39: (0.42497843503952026, 0.46611812710762024)\n",
      "Epoch 40: (0.4618101119995117, 0.5070626139640808)\n",
      "Loss after step 40: 0.8138567805290222\n",
      "Epoch 41: (0.5007606744766235, 0.5505368709564209)\n",
      "Epoch 42: (0.5416762828826904, 0.5964280366897583)\n",
      "Epoch 43: (0.5843320488929749, 0.6445562839508057)\n",
      "Epoch 44: (0.6284285187721252, 0.6946715116500854)\n",
      "Epoch 45: (0.6735928058624268, 0.7464552521705627)\n",
      "Epoch 46: (0.7193858623504639, 0.7995281219482422)\n",
      "Epoch 47: (0.7653157711029053, 0.8534636497497559)\n",
      "Epoch 48: (0.8108565211296082, 0.9078077673912048)\n",
      "Epoch 49: (0.8554709553718567, 0.9621021151542664)\n",
      "Epoch 50: (0.8986347317695618, 1.0159088373184204)\n",
      "Loss after step 50: 0.3750203549861908\n",
      "Epoch 51: (0.9398593902587891, 1.0688340663909912)\n",
      "Epoch 52: (0.9787107706069946, 1.1205471754074097)\n",
      "Epoch 53: (1.0148218870162964, 1.1707944869995117)\n",
      "Epoch 54: (1.0478986501693726, 1.2194054126739502)\n",
      "Epoch 55: (1.0777196884155273, 1.2662931680679321)\n",
      "Epoch 56: (1.1041301488876343, 1.311449408531189)\n",
      "Epoch 57: (1.127032995223999, 1.354935884475708)\n",
      "Epoch 58: (1.1463772058486938, 1.3968735933303833)\n",
      "Epoch 59: (1.1621465682983398, 1.4374314546585083)\n",
      "Epoch 60: (1.1743487119674683, 1.4768157005310059)\n",
      "Loss after step 60: 0.052838265895843506\n",
      "Epoch 61: (1.1830050945281982, 1.5152597427368164)\n",
      "Epoch 62: (1.1881437301635742, 1.553015947341919)\n",
      "Epoch 63: (1.1897931098937988, 1.590348243713379)\n",
      "Epoch 64: (1.1879782676696777, 1.6275262832641602)\n",
      "Epoch 65: (1.1827187538146973, 1.6648198366165161)\n",
      "Epoch 66: (1.1740283966064453, 1.702493667602539)\n",
      "Epoch 67: (1.1619168519973755, 1.7408030033111572)\n",
      "Epoch 68: (1.146392583847046, 1.7799879312515259)\n",
      "Epoch 69: (1.1274679899215698, 1.820267915725708)\n",
      "Epoch 70: (1.1051654815673828, 1.8618348836898804)\n",
      "Loss after step 70: -0.10590392351150513\n",
      "Epoch 71: (1.0795255899429321, 1.9048453569412231)\n",
      "Epoch 72: (1.0506160259246826, 1.9494123458862305)\n",
      "Epoch 73: (1.018541693687439, 1.9955958127975464)\n",
      "Epoch 74: (0.9834545850753784, 2.043394088745117)\n",
      "Epoch 75: (0.9455628991127014, 2.0927350521087646)\n",
      "Epoch 76: (0.9051381945610046, 2.1434707641601562)\n",
      "Epoch 77: (0.8625186085700989, 2.1953752040863037)\n",
      "Epoch 78: (0.8181073665618896, 2.2481465339660645)\n",
      "Epoch 79: (0.7723652124404907, 2.30141544342041)\n",
      "Epoch 80: (0.7257967591285706, 2.354759931564331)\n",
      "Loss after step 80: -0.4779837727546692\n",
      "Epoch 81: (0.6789312362670898, 2.4077253341674805)\n",
      "Epoch 82: (0.6322994232177734, 2.459847927093506)\n",
      "Epoch 83: (0.5864096879959106, 2.5106801986694336)\n",
      "Epoch 84: (0.5417252779006958, 2.5598134994506836)\n",
      "Epoch 85: (0.4986463487148285, 2.6068966388702393)\n",
      "Epoch 86: (0.4574976861476898, 2.6516494750976562)\n",
      "Epoch 87: (0.4185234606266022, 2.6938676834106445)\n",
      "Epoch 88: (0.38188809156417847, 2.7334227561950684)\n",
      "Epoch 89: (0.34768232703208923, 2.770256519317627)\n",
      "Epoch 90: (0.31593260169029236, 2.8043713569641113)\n",
      "Loss after step 90: -0.876086413860321\n",
      "Epoch 91: (0.28661224246025085, 2.835820436477661)\n",
      "Epoch 92: (0.25965315103530884, 2.8646953105926514)\n",
      "Epoch 93: (0.23495660722255707, 2.891116142272949)\n",
      "Epoch 94: (0.21240299940109253, 2.915221691131592)\n",
      "Epoch 95: (0.1918598860502243, 2.937161445617676)\n",
      "Epoch 96: (0.1731884628534317, 2.957089900970459)\n",
      "Epoch 97: (0.156248539686203, 2.97516131401062)\n",
      "Epoch 98: (0.14090220630168915, 2.991525888442993)\n",
      "Epoch 99: (0.12701639533042908, 3.0063281059265137)\n",
      "Epoch 100: (0.11446458846330643, 3.019704818725586)\n",
      "Loss after step 100: -0.9828835129737854\n",
      "Epoch 101: (0.1031278446316719, 3.0317838191986084)\n",
      "Epoch 102: (0.09289533644914627, 3.042684316635132)\n",
      "Epoch 103: (0.08366449177265167, 3.052516460418701)\n",
      "Epoch 104: (0.07534093409776688, 3.0613811016082764)\n",
      "Epoch 105: (0.0678381696343422, 3.069370985031128)\n",
      "Epoch 106: (0.06107722595334053, 3.0765702724456787)\n",
      "Epoch 107: (0.054986197501420975, 3.0830557346343994)\n",
      "Epoch 108: (0.0494997613132, 3.088897228240967)\n",
      "Epoch 109: (0.04455867409706116, 3.0941579341888428)\n",
      "Epoch 110: (0.040109291672706604, 3.0988948345184326)\n",
      "Loss after step 110: -0.997883677482605\n",
      "Epoch 111: (0.03610309213399887, 3.1031599044799805)\n",
      "Epoch 112: (0.03249623253941536, 3.106999635696411)\n",
      "Epoch 113: (0.029249126091599464, 3.1104564666748047)\n",
      "Epoch 114: (0.026326047256588936, 3.1135683059692383)\n",
      "Epoch 115: (0.023694779723882675, 3.1163694858551025)\n",
      "Epoch 116: (0.021326277405023575, 3.1188907623291016)\n",
      "Epoch 117: (0.019194360822439194, 3.1211602687835693)\n",
      "Epoch 118: (0.01727544330060482, 3.1232030391693115)\n",
      "Epoch 119: (0.015548276714980602, 3.1250417232513428)\n",
      "Epoch 120: (0.013993724249303341, 3.1266965866088867)\n",
      "Loss after step 120: -0.9997422099113464\n",
      "Epoch 121: (0.012594552710652351, 3.128185987472534)\n",
      "Epoch 122: (0.011335243470966816, 3.1295266151428223)\n",
      "Epoch 123: (0.010201825760304928, 3.130733013153076)\n",
      "Epoch 124: (0.00918172113597393, 3.131819009780884)\n",
      "Epoch 125: (0.008263605646789074, 3.132796287536621)\n",
      "Epoch 126: (0.0074372864328324795, 3.1336758136749268)\n",
      "Epoch 127: (0.006693588104099035, 3.134467363357544)\n",
      "Epoch 128: (0.006024251226335764, 3.1351797580718994)\n",
      "Epoch 129: (0.005421841982752085, 3.1358211040496826)\n",
      "Epoch 130: (0.004879669286310673, 3.1363983154296875)\n",
      "Loss after step 130: -0.9999685883522034\n",
      "Epoch 131: (0.004391710739582777, 3.13691782951355)\n",
      "Epoch 132: (0.0039525460451841354, 3.137385368347168)\n",
      "Epoch 133: (0.0035572960041463375, 3.1378061771392822)\n",
      "Epoch 134: (0.003201569663360715, 3.1381847858428955)\n",
      "Epoch 135: (0.0028814151883125305, 3.1385254859924316)\n",
      "Epoch 136: (0.0025932753924280405, 3.1388320922851562)\n",
      "Epoch 137: (0.002333949087187648, 3.139108180999756)\n",
      "Epoch 138: (0.002100554993376136, 3.1393566131591797)\n",
      "Epoch 139: (0.0018905001925304532, 3.139580249786377)\n",
      "Epoch 140: (0.0017014506738632917, 3.1397814750671387)\n",
      "Loss after step 140: -0.9999963045120239\n",
      "Epoch 141: (0.001531305955722928, 3.139962673187256)\n",
      "Epoch 142: (0.0013781756861135364, 3.1401257514953613)\n",
      "Epoch 143: (0.0012403583386912942, 3.140272378921509)\n",
      "Epoch 144: (0.0011163227027282119, 3.140404462814331)\n",
      "Epoch 145: (0.0010046905372291803, 3.1405231952667236)\n",
      "Epoch 146: (0.0009042215533554554, 3.1406302452087402)\n",
      "Epoch 147: (0.0008137994445860386, 3.1407265663146973)\n",
      "Epoch 148: (0.0007324195466935635, 3.140813112258911)\n",
      "Epoch 149: (0.0006591776036657393, 3.1408910751342773)\n",
      "Epoch 150: (0.0005932598724029958, 3.140961170196533)\n",
      "Loss after step 150: -0.9999995231628418\n",
      "Epoch 151: (0.0005339339259080589, 3.141024351119995)\n",
      "Epoch 152: (0.00048054056242108345, 3.1410810947418213)\n",
      "Epoch 153: (0.000432486500358209, 3.141132354736328)\n",
      "Epoch 154: (0.00038923785905353725, 3.1411783695220947)\n",
      "Epoch 155: (0.00035031407605856657, 3.1412198543548584)\n",
      "Epoch 156: (0.0003152826684527099, 3.1412570476531982)\n",
      "Epoch 157: (0.0002837544016074389, 3.1412906646728516)\n",
      "Epoch 158: (0.0002553789527155459, 3.1413209438323975)\n",
      "Epoch 159: (0.00022984105453360826, 3.141348123550415)\n",
      "Epoch 160: (0.00020685694471467286, 3.1413726806640625)\n",
      "Loss after step 160: -1.0\n",
      "Epoch 161: (0.0001861712516983971, 3.14139461517334)\n",
      "Epoch 162: (0.0001675541279837489, 3.1414144039154053)\n",
      "Epoch 163: (0.00015079871809575707, 3.141432285308838)\n",
      "Epoch 164: (0.00013571884483098984, 3.1414482593536377)\n",
      "Epoch 165: (0.0001221469574375078, 3.141462802886963)\n",
      "Epoch 166: (0.00010993226169375703, 3.1414756774902344)\n",
      "Epoch 167: (9.893903188640252e-05, 3.1414873600006104)\n",
      "Epoch 168: (8.904512651497498e-05, 3.141497850418091)\n",
      "Epoch 169: (8.014061313588172e-05, 3.141507387161255)\n",
      "Epoch 170: (7.212655327748507e-05, 3.1415159702301025)\n",
      "Loss after step 170: -1.0\n",
      "Epoch 171: (6.491389649454504e-05, 3.141523599624634)\n",
      "Epoch 172: (5.8422505389899015e-05, 3.1415305137634277)\n",
      "Epoch 173: (5.258025339571759e-05, 3.1415367126464844)\n",
      "Epoch 174: (4.732222805614583e-05, 3.1415421962738037)\n",
      "Epoch 175: (4.2590003431541845e-05, 3.141547203063965)\n",
      "Epoch 176: (3.833100345218554e-05, 3.1415517330169678)\n",
      "Epoch 177: (3.4497901651775464e-05, 3.1415557861328125)\n",
      "Epoch 178: (3.10481118503958e-05, 3.141559362411499)\n",
      "Epoch 179: (2.794330066535622e-05, 3.1415627002716064)\n",
      "Epoch 180: (2.5148970962618478e-05, 3.1415657997131348)\n",
      "Loss after step 180: -1.0\n",
      "Epoch 181: (2.263407441205345e-05, 3.141568422317505)\n",
      "Epoch 182: (2.0370667698443867e-05, 3.141570806503296)\n",
      "Epoch 183: (1.83336014742963e-05, 3.141572952270508)\n",
      "Epoch 184: (1.6500242054462433e-05, 3.1415748596191406)\n",
      "Epoch 185: (1.485021766711725e-05, 3.1415765285491943)\n",
      "Epoch 186: (1.3365195627557114e-05, 3.141578197479248)\n",
      "Epoch 187: (1.2028675882902462e-05, 3.1415796279907227)\n",
      "Epoch 188: (1.0825808203662746e-05, 3.141580820083618)\n",
      "Epoch 189: (9.743227565195411e-06, 3.1415820121765137)\n",
      "Epoch 190: (8.76890499057481e-06, 3.14158296585083)\n",
      "Loss after step 190: -1.0\n",
      "Epoch 191: (7.89201476436574e-06, 3.1415839195251465)\n",
      "Epoch 192: (7.1028134698281065e-06, 3.141584873199463)\n",
      "Epoch 193: (6.392532213794766e-06, 3.1415855884552)\n",
      "Epoch 194: (5.753278855991084e-06, 3.1415863037109375)\n",
      "Epoch 195: (5.177951152290916e-06, 3.141587018966675)\n",
      "Epoch 196: (4.660155809688149e-06, 3.141587495803833)\n",
      "Epoch 197: (4.194140274194069e-06, 3.141587972640991)\n",
      "Epoch 198: (3.7747263377241325e-06, 3.1415884494781494)\n",
      "Epoch 199: (3.3972537494264543e-06, 3.1415889263153076)\n",
      "Epoch 200: (3.057528374483809e-06, 3.141589403152466)\n",
      "Loss after step 200: -1.0\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "We can see that the optimization converges after approximately 160 steps.\n",
    "\n",
    "Substituting this into the theoretical result $⟨ψ∣σ_z∣ψ⟩ = \\cos ϕ_1 \\cos ϕ_2$, we can verify that this is indeed one possible value of the circuit parameters that produces $⟨ψ∣σ_z∣ψ⟩ = −1$, resulting in the qubit being rotated to the state |1⟩.\n",
    "\n"
   ],
   "metadata": {
    "id": "GpzzV6kyr_Z1"
   }
  }
 ]
}
