{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "02e49b8f",
   "metadata": {},
   "source": [
    "# Dedicated Operators\n",
    "\n",
    "[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/brainpy/brainpy/blob/master/docs_version2/tutorial_math/Dedicated_Operators.ipynb)\n",
    "[![Open in Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://github.com/brainpy/brainpy/blob/master/docs_version2/tutorial_math/Dedicated_Operators.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4bcbd04",
   "metadata": {},
   "source": [
    "@[Xiaoyu Chen](mailto:c-xy17@tsinghua.org.cn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11f7f110",
   "metadata": {},
   "source": [
     "In the previous section, we learned that BrainPy provides `brainpy.math.array`, which supports common NumPy-like operations and brings great convenience to users familiar with NumPy. NumPy-like operations, however, do not always suit brain dynamics modeling. The most costly computation in brain dynamics modeling lies in synaptic computation, because it involves the connections between two neuron groups. Two of the most salient features of synaptic computation are:\n",
     "1. **Sparse connection**: neuron-to-neuron connections in most brain regions are sparse.\n",
     "2. **Event-driven synaptic transmission**: synaptic transmission happens only when the presynaptic neurons fire.\n",
     "\n",
     "If we stick to dense array operations, several problems arise that make the computation extremely inefficient:\n",
     "1. Because the connections are sparse, traditional array operations (e.g., matrix multiplication) waste a huge amount of memory and computation on zero entries.\n",
     "2. Because synaptic transmission is event-driven, only a small proportion of presynaptic neurons are active at any time, so the transmission from the pre- to the postsynaptic population is sparse. Traditional array operations compute the transmission over the connections from all presynaptic neurons to their postsynaptic targets; they cannot exclude the inactive neurons and therefore perform redundant computation.\n",
     "\n",
     "To show these two features more intuitively, here is an illustration of computing postsynaptic inputs from presynaptic events and their connection:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d0831d0",
   "metadata": {},
   "source": [
    "<img src=\"../_static/sparse_connection_and_events.png\" width=\"360 px\" align=\"center\">"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eabd0f86",
   "metadata": {},
   "source": [
     "In essence, **synaptic computation is the multiplication of a vector and a matrix**. If the presynaptic events and the synaptic connections are both sparse, using NumPy-like dense matrix multiplication is extremely wasteful.\n",
     "\n",
     "In this section, we introduce dedicated operators for sparse and event-driven computation that make brain dynamics modeling much more efficient."
   ]
  },
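  {
   "cell_type": "markdown",
   "id": "0f3c9a11",
   "metadata": {},
   "source": [
    "To get a rough sense of the waste, here is a back-of-the-envelope estimate (a sketch with assumed network sizes, not a benchmark). With a 2% connection probability, a dense connection matrix stores roughly 25 times more entries than a compressed sparse representation:"
   ]
  },
  {
   "cell_type": "code",
   "id": "0f3c9a12",
   "metadata": {},
   "source": [
    "# assumed sizes: 10,000 pre- and 10,000 postsynaptic neurons, 2% connectivity\n",
    "n_pre, n_post, p = 10_000, 10_000, 0.02\n",
    "\n",
    "dense_entries = n_pre * n_post                           # every entry is stored\n",
    "csr_entries = 2 * int(p * n_pre * n_post) + (n_pre + 1)  # weights + indices + row pointers\n",
    "\n",
    "dense_entries, csr_entries"
   ],
   "outputs": [],
   "execution_count": null
  },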
  {
   "cell_type": "code",
   "id": "9c25d255",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-06T04:00:43.892729Z",
     "start_time": "2025-10-06T04:00:39.249483Z"
    }
   },
   "source": [
    "import brainpy as bp\n",
    "import brainpy.math as bm\n",
    "\n",
    "bm.set_platform('cpu')"
   ],
   "outputs": [],
   "execution_count": 1
  },
  {
   "cell_type": "markdown",
   "id": "018af2b3",
   "metadata": {},
   "source": [
    "## Operators for sparse synaptic computation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31c1307f",
   "metadata": {},
   "source": [
     "First, let's consider the situation in which **the synaptic connection is sparse**. Here is an example of the connection between two groups of neurons:\n",
    "\n",
    "<img src=\"../_static/example_synaptic_connection.png\" width=\"300 px\" align=\"center\">\n",
    "\n",
    "The yellow numbers on the connection lines indicate the connection weights. We can convert the connection pattern into a connection matrix, where each row represents a presynaptic neuron and each column represents a postsynaptic neuron:"
   ]
  },
  {
   "cell_type": "code",
   "id": "4de2e835",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-06T04:00:43.936726Z",
     "start_time": "2025-10-06T04:00:43.897790Z"
    }
   },
   "source": [
    "conn_mat = bm.array([[0, 0, 1, 2, 3], \n",
    "                     [4, 0, 5, 0, 6], \n",
    "                     [0, 7, 0, 0, 0]]).astype(float)\n",
    "conn_mat"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array([[0., 0., 1., 2., 3.],\n",
       "       [4., 0., 5., 0., 6.],\n",
       "       [0., 7., 0., 0., 0.]], dtype=float32)"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 2
  },
  {
   "cell_type": "markdown",
   "id": "5a47598d",
   "metadata": {},
   "source": [
     "Assume that at a given time, the presynaptic neurons are active with different values (2, 1, 3). The postsynaptic inputs from these three presynaptic neurons can then be computed as **the multiplication of a vector and a sparse matrix**:\n",
     "\n",
     "<img src=\"../_static/sparse_matrix_multiplication.png\" width=\"450 px\" align=\"center\">\n",
     "\n",
     "Here the presynaptic activity is displayed as a vector."
   ]
  },
  {
   "cell_type": "code",
   "id": "23a29d52",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-06T04:00:43.982378Z",
     "start_time": "2025-10-06T04:00:43.939350Z"
    }
   },
   "source": [
    "pre_activity = bm.array([2., 1., 3.])\n",
    "\n",
    "bm.matmul(pre_activity, conn_mat)"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array(value=Array([ 4., 21.,  7.,  4., 12.]), dtype=float32)"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 3
  },
  {
   "cell_type": "markdown",
   "id": "99805a6e",
   "metadata": {},
   "source": [
     "While this multiplication is intuitive, the computation is inefficient when the connection matrix is sparse. To save memory and improve running efficiency, we can use other data structures that store the sparse connections without all the zero entries. One such data structure is the **Compressed Sparse Row (CSR) matrix**.\n",
     "\n",
     "Simply speaking, a CSR matrix stores the connection information in three vectors: the non-zero weight values of the connection matrix, the corresponding postsynaptic neuron indices, and the presynaptic row pointers (see the `pre2post` connection properties in [Synaptic Connections](../tutorial_toolbox/synaptic_connetions.ipynb) for more information). Even without knowing the details, these vectors can easily be obtained by the following operations:"
   ]
  },
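  {
   "cell_type": "markdown",
   "id": "3e7d2b41",
   "metadata": {},
   "source": [
    "Before calling any BrainPy helper, it may help to see the CSR layout concretely. The following pure-Python sketch (an illustration of the format, not BrainPy's implementation) compresses the connection matrix above into the three CSR vectors:"
   ]
  },
  {
   "cell_type": "code",
   "id": "3e7d2b42",
   "metadata": {},
   "source": [
    "dense = [[0., 0., 1., 2., 3.],\n",
    "         [4., 0., 5., 0., 6.],\n",
    "         [0., 7., 0., 0., 0.]]\n",
    "\n",
    "weights, post_ids, row_ptr = [], [], [0]\n",
    "for row in dense:\n",
    "    for j, w in enumerate(row):\n",
    "        if w != 0.:\n",
    "            weights.append(w)    # non-zero weight value\n",
    "            post_ids.append(j)   # its postsynaptic (column) index\n",
    "    row_ptr.append(len(weights)) # row i owns weights[row_ptr[i]:row_ptr[i+1]]\n",
    "\n",
    "weights, post_ids, row_ptr"
   ],
   "outputs": [],
   "execution_count": null
  },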
  {
   "cell_type": "code",
   "id": "bed675e1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-06T04:01:41.535399Z",
     "start_time": "2025-10-06T04:01:40.698492Z"
    }
   },
   "source": [
     "# extract the non-zero weight values from the connection matrix\n",
     "data = conn_mat[bm.nonzero(conn_mat)]\n",
     "\n",
     "# define a connection from a connection matrix by brainpy.conn.MatConn\n",
     "connection = bp.conn.MatConn(conn_mat)\n",
     "# obtain the indices and pointers by .require('pre2post')\n",
     "indices, indptr = connection(conn_mat.shape[0], conn_mat.shape[1]).require('pre2post')\n",
     "\n",
     "data, indices, indptr"
    ],
    "outputs": [],
    "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "ca92801d",
   "metadata": {},
   "source": [
     "Then we can use the dedicated operator `brainpy.math.sparse.csrmv` to compute the postsynaptic inputs (i.e., the product of a **CSR** **M**atrix and a **V**ector):"
   ]
  },
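  {
   "cell_type": "markdown",
   "id": "5c8e1f90",
   "metadata": {},
   "source": [
    "To clarify the semantics, here is a pure-Python sketch of what this vector-matrix product computes when `transpose=True` (an illustration only, not BrainPy's implementation): each presynaptic value is scattered onto the postsynaptic targets stored in its row."
   ]
  },
  {
   "cell_type": "code",
   "id": "5c8e1f91",
   "metadata": {},
   "source": [
    "def csrmv_sketch(data, indices, indptr, vector, shape):\n",
    "    # computes vector @ M for a CSR matrix M of the given shape\n",
    "    out = [0.] * shape[1]\n",
    "    for i in range(shape[0]):                  # presynaptic rows\n",
    "        for k in range(indptr[i], indptr[i + 1]):\n",
    "            out[indices[k]] += data[k] * vector[i]\n",
    "    return out\n",
    "\n",
    "csrmv_sketch([1., 2., 3., 4., 5., 6., 7.],  # weights of the example matrix\n",
    "             [2, 3, 4, 0, 2, 4, 1],         # postsynaptic indices\n",
    "             [0, 3, 6, 7],                  # row pointers\n",
    "             [2., 1., 3.], (3, 5))"
   ],
   "outputs": [],
   "execution_count": null
  },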
  {
   "cell_type": "code",
   "id": "ce6fd5b0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-10-06T04:01:49.001126Z",
     "start_time": "2025-10-06T04:01:43.983999Z"
    }
   },
   "source": [
    "bm.sparse.csrmv(data, indices=indices, indptr=indptr, vector=pre_activity, \n",
    "                shape=conn_mat.shape, transpose=True)"
   ],
   "outputs": [
    {
     "ename": "TypingError",
     "evalue": "Failed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1m\u001B[1m\u001B[1mFailed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1mNo implementation of function Function(<built-in function iadd>) found for signature:\n \n >>> iadd(float32, array(float32, 2d, C))\n \nThere are 18 candidate implementations:\n\u001B[1m  - Of which 16 did not match due to:\n  Overload of function 'iadd': File: <numerous>: Line N/A.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match.\u001B[0m\n\u001B[1m  - Of which 2 did not match due to:\n  Operator Overload in function 'iadd': File: unknown: Line unknown.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match for registered cases:\n    * (int64, int64) -> int64\n    * (int64, uint64) -> int64\n    * (uint64, int64) -> int64\n    * (uint64, uint64) -> uint64\n    * (float32, float32) -> float32\n    * (float64, float64) -> float64\n    * (complex64, complex64) -> complex64\n    * (complex128, complex128) -> complex128\u001B[0m\n\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of intrinsic-call at C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py (105)\u001B[0m\n\u001B[1m\nFile \"C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py\", line 105:\u001B[0m\n\u001B[1m            def mv(weights, indices, indptr, vector, _, posts):\n                <source elided>\n                    for j in range(indptr[i], indptr[i + 1]):\n\u001B[1m                        posts[indices[j]] += weights[j] * sp\n\u001B[0m                        \u001B[1m^\u001B[0m\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  
(8)\n\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  (8)\n\u001B[0m\n\u001B[1m\nFile \"D:\\codes\\projects\\BrainPy\\docs_version2\\tutorial_math\", line 8:\u001B[0m\n\u001B[1m<source missing, REPL/exec in use?>\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m",
     "output_type": "error",
     "traceback": [
      "\u001B[31m---------------------------------------------------------------------------\u001B[39m",
      "\u001B[31mJaxStackTraceBeforeTransformation\u001B[39m         Traceback (most recent call last)",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\runpy.py:198\u001B[39m, in \u001B[36m_run_module_as_main\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    197\u001B[39m     sys.argv[\u001B[32m0\u001B[39m] = mod_spec.origin\n\u001B[32m--> \u001B[39m\u001B[32m198\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m _run_code(code, main_globals, \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[32m    199\u001B[39m                  \u001B[33m\"\u001B[39m\u001B[33m__main__\u001B[39m\u001B[33m\"\u001B[39m, mod_spec)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\runpy.py:88\u001B[39m, in \u001B[36m_run_code\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m     81\u001B[39m run_globals.update(\u001B[34m__name__\u001B[39m = mod_name,\n\u001B[32m     82\u001B[39m                    \u001B[34m__file__\u001B[39m = fname,\n\u001B[32m     83\u001B[39m                    __cached__ = cached,\n\u001B[32m   (...)\u001B[39m\u001B[32m     86\u001B[39m                    __package__ = pkg_name,\n\u001B[32m     87\u001B[39m                    __spec__ = mod_spec)\n\u001B[32m---> \u001B[39m\u001B[32m88\u001B[39m exec(code, run_globals)\n\u001B[32m     89\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m run_globals\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel_launcher.py:18\u001B[39m\n\u001B[32m     16\u001B[39m \u001B[38;5;28;01mfrom\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[34;01mipykernel\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[38;5;28;01mimport\u001B[39;00m kernelapp \u001B[38;5;28;01mas\u001B[39;00m app\n\u001B[32m---> \u001B[39m\u001B[32m18\u001B[39m app.launch_new_instance()\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\traitlets\\config\\application.py:1075\u001B[39m, in \u001B[36mlaunch_instance\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   1074\u001B[39m app.initialize(argv)\n\u001B[32m-> \u001B[39m\u001B[32m1075\u001B[39m app.start()\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\kernelapp.py:739\u001B[39m, in \u001B[36mstart\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    738\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m739\u001B[39m     \u001B[38;5;28mself\u001B[39m.io_loop.start()\n\u001B[32m    740\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mKeyboardInterrupt\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\tornado\\platform\\asyncio.py:205\u001B[39m, in \u001B[36mstart\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    204\u001B[39m \u001B[38;5;28;01mdef\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[34mstart\u001B[39m(\u001B[38;5;28mself\u001B[39m) -> \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m205\u001B[39m     \u001B[38;5;28mself\u001B[39m.asyncio_loop.run_forever()\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\asyncio\\base_events.py:640\u001B[39m, in \u001B[36mrun_forever\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    639\u001B[39m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28;01mTrue\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m640\u001B[39m     \u001B[38;5;28mself\u001B[39m._run_once()\n\u001B[32m    641\u001B[39m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m._stopping:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\asyncio\\base_events.py:1992\u001B[39m, in \u001B[36m_run_once\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   1991\u001B[39m     \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m-> \u001B[39m\u001B[32m1992\u001B[39m         handle._run()\n\u001B[32m   1993\u001B[39m handle = \u001B[38;5;28;01mNone\u001B[39;00m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\asyncio\\events.py:88\u001B[39m, in \u001B[36m_run\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m     87\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m---> \u001B[39m\u001B[32m88\u001B[39m     \u001B[38;5;28mself\u001B[39m._context.run(\u001B[38;5;28mself\u001B[39m._callback, *\u001B[38;5;28mself\u001B[39m._args)\n\u001B[32m     89\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m (\u001B[38;5;167;01mSystemExit\u001B[39;00m, \u001B[38;5;167;01mKeyboardInterrupt\u001B[39;00m):\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\kernelbase.py:545\u001B[39m, in \u001B[36mdispatch_queue\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    544\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m545\u001B[39m     \u001B[38;5;28;01mawait\u001B[39;00m \u001B[38;5;28mself\u001B[39m.process_one()\n\u001B[32m    546\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mException\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\kernelbase.py:534\u001B[39m, in \u001B[36mprocess_one\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    533\u001B[39m         \u001B[38;5;28;01mreturn\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m534\u001B[39m \u001B[38;5;28;01mawait\u001B[39;00m dispatch(*args)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\kernelbase.py:437\u001B[39m, in \u001B[36mdispatch_shell\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    436\u001B[39m     \u001B[38;5;28;01mif\u001B[39;00m inspect.isawaitable(result):\n\u001B[32m--> \u001B[39m\u001B[32m437\u001B[39m         \u001B[38;5;28;01mawait\u001B[39;00m result\n\u001B[32m    438\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mException\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\ipkernel.py:362\u001B[39m, in \u001B[36mexecute_request\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    361\u001B[39m \u001B[38;5;28mself\u001B[39m._associate_new_top_level_threads_with(parent_header)\n\u001B[32m--> \u001B[39m\u001B[32m362\u001B[39m \u001B[38;5;28;01mawait\u001B[39;00m \u001B[38;5;28msuper\u001B[39m().execute_request(stream, ident, parent)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\kernelbase.py:778\u001B[39m, in \u001B[36mexecute_request\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    777\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m inspect.isawaitable(reply_content):\n\u001B[32m--> \u001B[39m\u001B[32m778\u001B[39m     reply_content = \u001B[38;5;28;01mawait\u001B[39;00m reply_content\n\u001B[32m    780\u001B[39m \u001B[38;5;66;03m# Flush output before sending the reply.\u001B[39;00m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\ipkernel.py:449\u001B[39m, in \u001B[36mdo_execute\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    448\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m accepts_params[\u001B[33m\"\u001B[39m\u001B[33mcell_id\u001B[39m\u001B[33m\"\u001B[39m]:\n\u001B[32m--> \u001B[39m\u001B[32m449\u001B[39m     res = shell.run_cell(\n\u001B[32m    450\u001B[39m         code,\n\u001B[32m    451\u001B[39m         store_history=store_history,\n\u001B[32m    452\u001B[39m         silent=silent,\n\u001B[32m    453\u001B[39m         cell_id=cell_id,\n\u001B[32m    454\u001B[39m     )\n\u001B[32m    455\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\ipykernel\\zmqshell.py:549\u001B[39m, in \u001B[36mrun_cell\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    548\u001B[39m \u001B[38;5;28mself\u001B[39m._last_traceback = \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m549\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28msuper\u001B[39m().run_cell(*args, **kwargs)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\interactiveshell.py:3116\u001B[39m, in \u001B[36mrun_cell\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   3115\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m-> \u001B[39m\u001B[32m3116\u001B[39m     result = \u001B[38;5;28mself\u001B[39m._run_cell(\n\u001B[32m   3117\u001B[39m         raw_cell, store_history, silent, shell_futures, cell_id\n\u001B[32m   3118\u001B[39m     )\n\u001B[32m   3119\u001B[39m \u001B[38;5;28;01mfinally\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\interactiveshell.py:3171\u001B[39m, in \u001B[36m_run_cell\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   3170\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m-> \u001B[39m\u001B[32m3171\u001B[39m     result = runner(coro)\n\u001B[32m   3172\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\async_helpers.py:128\u001B[39m, in \u001B[36m_pseudo_sync_runner\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    127\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m128\u001B[39m     coro.send(\u001B[38;5;28;01mNone\u001B[39;00m)\n\u001B[32m    129\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mStopIteration\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m exc:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\interactiveshell.py:3394\u001B[39m, in \u001B[36mrun_cell_async\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   3391\u001B[39m interactivity = \u001B[33m\"\u001B[39m\u001B[33mnone\u001B[39m\u001B[33m\"\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m silent \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m.ast_node_interactivity\n\u001B[32m-> \u001B[39m\u001B[32m3394\u001B[39m has_raised = \u001B[38;5;28;01mawait\u001B[39;00m \u001B[38;5;28mself\u001B[39m.run_ast_nodes(code_ast.body, cell_name,\n\u001B[32m   3395\u001B[39m        interactivity=interactivity, compiler=compiler, result=result)\n\u001B[32m   3397\u001B[39m \u001B[38;5;28mself\u001B[39m.last_execution_succeeded = \u001B[38;5;129;01mnot\u001B[39;00m has_raised\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\interactiveshell.py:3639\u001B[39m, in \u001B[36mrun_ast_nodes\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   3638\u001B[39m     asy = compare(code)\n\u001B[32m-> \u001B[39m\u001B[32m3639\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28;01mawait\u001B[39;00m \u001B[38;5;28mself\u001B[39m.run_code(code, result, async_=asy):\n\u001B[32m   3640\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;01mTrue\u001B[39;00m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\IPython\\core\\interactiveshell.py:3699\u001B[39m, in \u001B[36mrun_code\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m   3698\u001B[39m     \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m-> \u001B[39m\u001B[32m3699\u001B[39m         exec(code_obj, \u001B[38;5;28mself\u001B[39m.user_global_ns, \u001B[38;5;28mself\u001B[39m.user_ns)\n\u001B[32m   3700\u001B[39m \u001B[38;5;28;01mfinally\u001B[39;00m:\n\u001B[32m   3701\u001B[39m     \u001B[38;5;66;03m# Reset our crash handler in place\u001B[39;00m\n",
      "\u001B[36mCell\u001B[39m\u001B[36m \u001B[39m\u001B[32mIn[7]\u001B[39m\u001B[32m, line 1\u001B[39m\n\u001B[32m----> \u001B[39m\u001B[32m1\u001B[39m bm.sparse.csrmv(data, indices=indices, indptr=indptr, vector=pre_activity, \n\u001B[32m      2\u001B[39m                 shape=conn_mat.shape, transpose=\u001B[38;5;28;01mTrue\u001B[39;00m)\n",
      "\u001B[36mFile \u001B[39m\u001B[32mD:\\codes\\projects\\BrainPy\\brainpy\\version2\\math\\sparse\\csr_mv.py:72\u001B[39m, in \u001B[36mcsrmv\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m     71\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m transpose:\n\u001B[32m---> \u001B[39m\u001B[32m72\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m vector @ csr\n\u001B[32m     73\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr.py:588\u001B[39m, in \u001B[36m__rmatmul__\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    587\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m other.ndim == \u001B[32m1\u001B[39m:\n\u001B[32m--> \u001B[39m\u001B[32m588\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m csr_matvec(\n\u001B[32m    589\u001B[39m         data,\n\u001B[32m    590\u001B[39m         \u001B[38;5;28mself\u001B[39m.indices,\n\u001B[32m    591\u001B[39m         \u001B[38;5;28mself\u001B[39m.indptr,\n\u001B[32m    592\u001B[39m         other,\n\u001B[32m    593\u001B[39m         shape=\u001B[38;5;28mself\u001B[39m.shape,\n\u001B[32m    594\u001B[39m         transpose=\u001B[38;5;28;01mTrue\u001B[39;00m\n\u001B[32m    595\u001B[39m     )\n\u001B[32m    596\u001B[39m \u001B[38;5;28;01melif\u001B[39;00m other.ndim == \u001B[32m2\u001B[39m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py:64\u001B[39m, in \u001B[36mcsr_matvec\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m     63\u001B[39m v, unitv = u.split_mantissa_unit(v)\n\u001B[32m---> \u001B[39m\u001B[32m64\u001B[39m res = csrmv_p_call(data, indices, indptr, v, shape=shape, transpose=transpose)[\u001B[32m0\u001B[39m]\n\u001B[32m     65\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m u.maybe_decimal(res * unitd * unitv)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py:523\u001B[39m, in \u001B[36mcsrmv_p_call\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    518\u001B[39m out_info = (\n\u001B[32m    519\u001B[39m     jax.ShapeDtypeStruct([shape[\u001B[32m1\u001B[39m]], weights.dtype)\n\u001B[32m    520\u001B[39m     \u001B[38;5;28;01mif\u001B[39;00m transpose \u001B[38;5;28;01melse\u001B[39;00m\n\u001B[32m    521\u001B[39m     jax.ShapeDtypeStruct([shape[\u001B[32m0\u001B[39m]], weights.dtype)\n\u001B[32m    522\u001B[39m )\n\u001B[32m--> \u001B[39m\u001B[32m523\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m csrmv_p(\n\u001B[32m    524\u001B[39m     weights,\n\u001B[32m    525\u001B[39m     indices,\n\u001B[32m    526\u001B[39m     indptr,\n\u001B[32m    527\u001B[39m     vector,\n\u001B[32m    528\u001B[39m     jnp.zeros(out_info.shape, out_info.dtype),\n\u001B[32m    529\u001B[39m     outs=[out_info],\n\u001B[32m    530\u001B[39m     shape=shape,\n\u001B[32m    531\u001B[39m     transpose=transpose,\n\u001B[32m    532\u001B[39m     indices_info=jax.ShapeDtypeStruct(indices.shape, indices.dtype),\n\u001B[32m    533\u001B[39m     indptr_info=jax.ShapeDtypeStruct(indptr.shape, indptr.dtype),\n\u001B[32m    534\u001B[39m     weight_info=jax.ShapeDtypeStruct(weights.shape, weights.dtype),\n\u001B[32m    535\u001B[39m     vector_info=jax.ShapeDtypeStruct(vector.shape, vector.dtype),\n\u001B[32m    536\u001B[39m )\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_xla_custom_op.py:359\u001B[39m, in \u001B[36m__call__\u001B[39m\u001B[34m()\u001B[39m\n\u001B[32m    358\u001B[39m outs, tree_def = jax.tree.flatten(outs)\n\u001B[32m--> \u001B[39m\u001B[32m359\u001B[39m r = \u001B[38;5;28mself\u001B[39m.primitive.bind(\n\u001B[32m    360\u001B[39m     *ins,\n\u001B[32m    361\u001B[39m     **kwargs,\n\u001B[32m    362\u001B[39m     outs=\u001B[38;5;28mtuple\u001B[39m(outs),\n\u001B[32m    363\u001B[39m )\n\u001B[32m    364\u001B[39m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28mlen\u001B[39m(r) == \u001B[38;5;28mlen\u001B[39m(outs), \u001B[33m'\u001B[39m\u001B[33mThe number of outputs does not match the expected.\u001B[39m\u001B[33m'\u001B[39m\n",
      "\u001B[31mJaxStackTraceBeforeTransformation\u001B[39m: numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1m\u001B[1m\u001B[1mFailed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1mNo implementation of function Function(<built-in function iadd>) found for signature:\n \n >>> iadd(float32, array(float32, 2d, C))\n \nThere are 18 candidate implementations:\n\u001B[1m  - Of which 16 did not match due to:\n  Overload of function 'iadd': File: <numerous>: Line N/A.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match.\u001B[0m\n\u001B[1m  - Of which 2 did not match due to:\n  Operator Overload in function 'iadd': File: unknown: Line unknown.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match for registered cases:\n    * (int64, int64) -> int64\n    * (int64, uint64) -> int64\n    * (uint64, int64) -> int64\n    * (uint64, uint64) -> uint64\n    * (float32, float32) -> float32\n    * (float64, float64) -> float64\n    * (complex64, complex64) -> complex64\n    * (complex128, complex128) -> complex128\u001B[0m\n\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of intrinsic-call at C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py (105)\u001B[0m\n\u001B[1m\nFile \"C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py\", line 105:\u001B[0m\n\u001B[1m            def mv(weights, indices, indptr, vector, _, posts):\n                <source elided>\n                    for j in range(indptr[i], indptr[i + 1]):\n\u001B[1m                        posts[indices[j]] += weights[j] * sp\n\u001B[0m                        \u001B[1m^\u001B[0m\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 
0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  (8)\n\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  (8)\n\u001B[0m\n\u001B[1m\nFile \"D:\\codes\\projects\\BrainPy\\docs_version2\\tutorial_math\", line 8:\u001B[0m\n\u001B[1m<source missing, REPL/exec in use?>\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m\n\nThe preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.\n\n--------------------",
      "\nThe above exception was the direct cause of the following exception:\n",
      "\u001B[31mTypingError\u001B[39m                               Traceback (most recent call last)",
      "\u001B[36mCell\u001B[39m\u001B[36m \u001B[39m\u001B[32mIn[7]\u001B[39m\u001B[32m, line 1\u001B[39m\n\u001B[32m----> \u001B[39m\u001B[32m1\u001B[39m bm.sparse.csrmv(data, indices=indices, indptr=indptr, vector=pre_activity, \n\u001B[32m      2\u001B[39m                 shape=conn_mat.shape, transpose=\u001B[38;5;28;01mTrue\u001B[39;00m)\n",
      "\u001B[36mFile \u001B[39m\u001B[32mD:\\codes\\projects\\BrainPy\\brainpy\\version2\\math\\sparse\\csr_mv.py:72\u001B[39m, in \u001B[36mcsrmv\u001B[39m\u001B[34m(data, indices, indptr, vector, shape, transpose)\u001B[39m\n\u001B[32m     70\u001B[39m csr = brainevent.CSR((data, indices, indptr), shape=shape)\n\u001B[32m     71\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m transpose:\n\u001B[32m---> \u001B[39m\u001B[32m72\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m vector @ csr\n\u001B[32m     73\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m     74\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m csr @ vector\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr.py:588\u001B[39m, in \u001B[36mCSR.__rmatmul__\u001B[39m\u001B[34m(self, other)\u001B[39m\n\u001B[32m    586\u001B[39m data, other = u.math.promote_dtypes(\u001B[38;5;28mself\u001B[39m.data, other)\n\u001B[32m    587\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m other.ndim == \u001B[32m1\u001B[39m:\n\u001B[32m--> \u001B[39m\u001B[32m588\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m csr_matvec(\n\u001B[32m    589\u001B[39m         data,\n\u001B[32m    590\u001B[39m         \u001B[38;5;28mself\u001B[39m.indices,\n\u001B[32m    591\u001B[39m         \u001B[38;5;28mself\u001B[39m.indptr,\n\u001B[32m    592\u001B[39m         other,\n\u001B[32m    593\u001B[39m         shape=\u001B[38;5;28mself\u001B[39m.shape,\n\u001B[32m    594\u001B[39m         transpose=\u001B[38;5;28;01mTrue\u001B[39;00m\n\u001B[32m    595\u001B[39m     )\n\u001B[32m    596\u001B[39m \u001B[38;5;28;01melif\u001B[39;00m other.ndim == \u001B[32m2\u001B[39m:\n\u001B[32m    597\u001B[39m     other = other.T\n",
      "    \u001B[31m[... skipping hidden 16 frame]\u001B[39m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_xla_custom_op_numba.py:235\u001B[39m, in \u001B[36m_numba_mlir_cpu_translation_rule\u001B[39m\u001B[34m(kernel_generator, debug, ctx, *ins, **kwargs)\u001B[39m\n\u001B[32m    232\u001B[39m new_f = code_scope[\u001B[33m'\u001B[39m\u001B[33mnumba_cpu_custom_call_target\u001B[39m\u001B[33m'\u001B[39m]\n\u001B[32m    234\u001B[39m \u001B[38;5;66;03m# register\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m235\u001B[39m xla_c_rule = cfunc(sig)(new_f)\n\u001B[32m    236\u001B[39m target_name = \u001B[33mf\u001B[39m\u001B[33m'\u001B[39m\u001B[33mbrainevent_numba_call_\u001B[39m\u001B[38;5;132;01m{\u001B[39;00m\u001B[38;5;28mstr\u001B[39m(xla_c_rule.address)\u001B[38;5;132;01m}\u001B[39;00m\u001B[33m'\u001B[39m\n\u001B[32m    238\u001B[39m PyCapsule_Destructor = ctypes.CFUNCTYPE(\u001B[38;5;28;01mNone\u001B[39;00m, ctypes.py_object)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\decorators.py:275\u001B[39m, in \u001B[36mcfunc.<locals>.wrapper\u001B[39m\u001B[34m(func)\u001B[39m\n\u001B[32m    273\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m cache:\n\u001B[32m    274\u001B[39m     res.enable_caching()\n\u001B[32m--> \u001B[39m\u001B[32m275\u001B[39m res.compile()\n\u001B[32m    276\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m res\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_lock.py:35\u001B[39m, in \u001B[36m_CompilerLock.__call__.<locals>._acquire_compile_lock\u001B[39m\u001B[34m(*args, **kwargs)\u001B[39m\n\u001B[32m     32\u001B[39m \u001B[38;5;129m@functools\u001B[39m.wraps(func)\n\u001B[32m     33\u001B[39m \u001B[38;5;28;01mdef\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[34m_acquire_compile_lock\u001B[39m(*args, **kwargs):\n\u001B[32m     34\u001B[39m     \u001B[38;5;28;01mwith\u001B[39;00m \u001B[38;5;28mself\u001B[39m:\n\u001B[32m---> \u001B[39m\u001B[32m35\u001B[39m         \u001B[38;5;28;01mreturn\u001B[39;00m func(*args, **kwargs)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\ccallback.py:68\u001B[39m, in \u001B[36mCFunc.compile\u001B[39m\u001B[34m(self)\u001B[39m\n\u001B[32m     65\u001B[39m cres = \u001B[38;5;28mself\u001B[39m._cache.load_overload(\u001B[38;5;28mself\u001B[39m._sig,\n\u001B[32m     66\u001B[39m                                  \u001B[38;5;28mself\u001B[39m._targetdescr.target_context)\n\u001B[32m     67\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m cres \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[32m---> \u001B[39m\u001B[32m68\u001B[39m     cres = \u001B[38;5;28mself\u001B[39m._compile_uncached()\n\u001B[32m     69\u001B[39m     \u001B[38;5;28mself\u001B[39m._cache.save_overload(\u001B[38;5;28mself\u001B[39m._sig, cres)\n\u001B[32m     70\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\ccallback.py:82\u001B[39m, in \u001B[36mCFunc._compile_uncached\u001B[39m\u001B[34m(self)\u001B[39m\n\u001B[32m     79\u001B[39m sig = \u001B[38;5;28mself\u001B[39m._sig\n\u001B[32m     81\u001B[39m \u001B[38;5;66;03m# Compile native function as well as cfunc wrapper\u001B[39;00m\n\u001B[32m---> \u001B[39m\u001B[32m82\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m._compiler.compile(sig.args, sig.return_type)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\dispatcher.py:84\u001B[39m, in \u001B[36m_FunctionCompiler.compile\u001B[39m\u001B[34m(self, args, return_type)\u001B[39m\n\u001B[32m     82\u001B[39m     \u001B[38;5;28;01mreturn\u001B[39;00m retval\n\u001B[32m     83\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m---> \u001B[39m\u001B[32m84\u001B[39m     \u001B[38;5;28;01mraise\u001B[39;00m retval\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\dispatcher.py:94\u001B[39m, in \u001B[36m_FunctionCompiler._compile_cached\u001B[39m\u001B[34m(self, args, return_type)\u001B[39m\n\u001B[32m     91\u001B[39m     \u001B[38;5;28;01mpass\u001B[39;00m\n\u001B[32m     93\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m---> \u001B[39m\u001B[32m94\u001B[39m     retval = \u001B[38;5;28mself\u001B[39m._compile_core(args, return_type)\n\u001B[32m     95\u001B[39m \u001B[38;5;28;01mexcept\u001B[39;00m errors.TypingError \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[32m     96\u001B[39m     \u001B[38;5;28mself\u001B[39m._failed_cache[key] = e\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\dispatcher.py:107\u001B[39m, in \u001B[36m_FunctionCompiler._compile_core\u001B[39m\u001B[34m(self, args, return_type)\u001B[39m\n\u001B[32m    104\u001B[39m flags = \u001B[38;5;28mself\u001B[39m._customize_flags(flags)\n\u001B[32m    106\u001B[39m impl = \u001B[38;5;28mself\u001B[39m._get_implementation(args, {})\n\u001B[32m--> \u001B[39m\u001B[32m107\u001B[39m cres = compiler.compile_extra(\u001B[38;5;28mself\u001B[39m.targetdescr.typing_context,\n\u001B[32m    108\u001B[39m                               \u001B[38;5;28mself\u001B[39m.targetdescr.target_context,\n\u001B[32m    109\u001B[39m                               impl,\n\u001B[32m    110\u001B[39m                               args=args, return_type=return_type,\n\u001B[32m    111\u001B[39m                               flags=flags, \u001B[38;5;28mlocals\u001B[39m=\u001B[38;5;28mself\u001B[39m.locals,\n\u001B[32m    112\u001B[39m                               pipeline_class=\u001B[38;5;28mself\u001B[39m.pipeline_class)\n\u001B[32m    113\u001B[39m \u001B[38;5;66;03m# Check typing error if object mode is used\u001B[39;00m\n\u001B[32m    114\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m cres.typing_error \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m \u001B[38;5;129;01mand\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m flags.enable_pyobject:\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler.py:739\u001B[39m, in \u001B[36mcompile_extra\u001B[39m\u001B[34m(typingctx, targetctx, func, args, return_type, flags, locals, library, pipeline_class)\u001B[39m\n\u001B[32m    715\u001B[39m \u001B[38;5;250m\u001B[39m\u001B[33;03m\"\"\"Compiler entry point\u001B[39;00m\n\u001B[32m    716\u001B[39m \n\u001B[32m    717\u001B[39m \u001B[33;03mParameter\u001B[39;00m\n\u001B[32m   (...)\u001B[39m\u001B[32m    735\u001B[39m \u001B[33;03m    compiler pipeline\u001B[39;00m\n\u001B[32m    736\u001B[39m \u001B[33;03m\"\"\"\u001B[39;00m\n\u001B[32m    737\u001B[39m pipeline = pipeline_class(typingctx, targetctx, library,\n\u001B[32m    738\u001B[39m                           args, return_type, flags, \u001B[38;5;28mlocals\u001B[39m)\n\u001B[32m--> \u001B[39m\u001B[32m739\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m pipeline.compile_extra(func)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler.py:439\u001B[39m, in \u001B[36mCompilerBase.compile_extra\u001B[39m\u001B[34m(self, func)\u001B[39m\n\u001B[32m    437\u001B[39m \u001B[38;5;28mself\u001B[39m.state.lifted = ()\n\u001B[32m    438\u001B[39m \u001B[38;5;28mself\u001B[39m.state.lifted_from = \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m439\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m._compile_bytecode()\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler.py:505\u001B[39m, in \u001B[36mCompilerBase._compile_bytecode\u001B[39m\u001B[34m(self)\u001B[39m\n\u001B[32m    501\u001B[39m \u001B[38;5;250m\u001B[39m\u001B[33;03m\"\"\"\u001B[39;00m\n\u001B[32m    502\u001B[39m \u001B[33;03mPopulate and run pipeline for bytecode input\u001B[39;00m\n\u001B[32m    503\u001B[39m \u001B[33;03m\"\"\"\u001B[39;00m\n\u001B[32m    504\u001B[39m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28mself\u001B[39m.state.func_ir \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m505\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m._compile_core()\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler.py:484\u001B[39m, in \u001B[36mCompilerBase._compile_core\u001B[39m\u001B[34m(self)\u001B[39m\n\u001B[32m    482\u001B[39m         \u001B[38;5;28mself\u001B[39m.state.status.fail_reason = e\n\u001B[32m    483\u001B[39m         \u001B[38;5;28;01mif\u001B[39;00m is_final_pipeline:\n\u001B[32m--> \u001B[39m\u001B[32m484\u001B[39m             \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[32m    485\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m    486\u001B[39m     \u001B[38;5;28;01mraise\u001B[39;00m CompilerError(\u001B[33m\"\u001B[39m\u001B[33mAll available pipelines exhausted\u001B[39m\u001B[33m\"\u001B[39m)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler.py:473\u001B[39m, in \u001B[36mCompilerBase._compile_core\u001B[39m\u001B[34m(self)\u001B[39m\n\u001B[32m    471\u001B[39m res = \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[32m    472\u001B[39m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[32m--> \u001B[39m\u001B[32m473\u001B[39m     pm.run(\u001B[38;5;28mself\u001B[39m.state)\n\u001B[32m    474\u001B[39m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m.state.cr \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[32m    475\u001B[39m         \u001B[38;5;28;01mbreak\u001B[39;00m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_machinery.py:367\u001B[39m, in \u001B[36mPassManager.run\u001B[39m\u001B[34m(self, state)\u001B[39m\n\u001B[32m    364\u001B[39m msg = \u001B[33m\"\u001B[39m\u001B[33mFailed in \u001B[39m\u001B[38;5;132;01m%s\u001B[39;00m\u001B[33m mode pipeline (step: \u001B[39m\u001B[38;5;132;01m%s\u001B[39;00m\u001B[33m)\u001B[39m\u001B[33m\"\u001B[39m % \\\n\u001B[32m    365\u001B[39m     (\u001B[38;5;28mself\u001B[39m.pipeline_name, pass_desc)\n\u001B[32m    366\u001B[39m patched_exception = \u001B[38;5;28mself\u001B[39m._patch_error(msg, e)\n\u001B[32m--> \u001B[39m\u001B[32m367\u001B[39m \u001B[38;5;28;01mraise\u001B[39;00m patched_exception\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_machinery.py:356\u001B[39m, in \u001B[36mPassManager.run\u001B[39m\u001B[34m(self, state)\u001B[39m\n\u001B[32m    354\u001B[39m pass_inst = _pass_registry.get(pss).pass_inst\n\u001B[32m    355\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(pass_inst, CompilerPass):\n\u001B[32m--> \u001B[39m\u001B[32m356\u001B[39m     \u001B[38;5;28mself\u001B[39m._runPass(idx, pass_inst, state)\n\u001B[32m    357\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m    358\u001B[39m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m(\u001B[33m\"\u001B[39m\u001B[33mLegacy pass in use\u001B[39m\u001B[33m\"\u001B[39m)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_lock.py:35\u001B[39m, in \u001B[36m_CompilerLock.__call__.<locals>._acquire_compile_lock\u001B[39m\u001B[34m(*args, **kwargs)\u001B[39m\n\u001B[32m     32\u001B[39m \u001B[38;5;129m@functools\u001B[39m.wraps(func)\n\u001B[32m     33\u001B[39m \u001B[38;5;28;01mdef\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[34m_acquire_compile_lock\u001B[39m(*args, **kwargs):\n\u001B[32m     34\u001B[39m     \u001B[38;5;28;01mwith\u001B[39;00m \u001B[38;5;28mself\u001B[39m:\n\u001B[32m---> \u001B[39m\u001B[32m35\u001B[39m         \u001B[38;5;28;01mreturn\u001B[39;00m func(*args, **kwargs)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_machinery.py:311\u001B[39m, in \u001B[36mPassManager._runPass\u001B[39m\u001B[34m(self, index, pss, internal_state)\u001B[39m\n\u001B[32m    309\u001B[39m     mutated |= check(pss.run_initialization, internal_state)\n\u001B[32m    310\u001B[39m \u001B[38;5;28;01mwith\u001B[39;00m SimpleTimer() \u001B[38;5;28;01mas\u001B[39;00m pass_time:\n\u001B[32m--> \u001B[39m\u001B[32m311\u001B[39m     mutated |= check(pss.run_pass, internal_state)\n\u001B[32m    312\u001B[39m \u001B[38;5;28;01mwith\u001B[39;00m SimpleTimer() \u001B[38;5;28;01mas\u001B[39;00m finalize_time:\n\u001B[32m    313\u001B[39m     mutated |= check(pss.run_finalizer, internal_state)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\compiler_machinery.py:272\u001B[39m, in \u001B[36mPassManager._runPass.<locals>.check\u001B[39m\u001B[34m(func, compiler_state)\u001B[39m\n\u001B[32m    271\u001B[39m \u001B[38;5;28;01mdef\u001B[39;00m\u001B[38;5;250m \u001B[39m\u001B[34mcheck\u001B[39m(func, compiler_state):\n\u001B[32m--> \u001B[39m\u001B[32m272\u001B[39m     mangled = func(compiler_state)\n\u001B[32m    273\u001B[39m     \u001B[38;5;28;01mif\u001B[39;00m mangled \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;129;01min\u001B[39;00m (\u001B[38;5;28;01mTrue\u001B[39;00m, \u001B[38;5;28;01mFalse\u001B[39;00m):\n\u001B[32m    274\u001B[39m         msg = (\u001B[33m\"\u001B[39m\u001B[33mCompilerPass implementations should return True/False. \u001B[39m\u001B[33m\"\u001B[39m\n\u001B[32m    275\u001B[39m                \u001B[33m\"\u001B[39m\u001B[33mCompilerPass with name \u001B[39m\u001B[33m'\u001B[39m\u001B[38;5;132;01m%s\u001B[39;00m\u001B[33m'\u001B[39m\u001B[33m did not.\u001B[39m\u001B[33m\"\u001B[39m)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\typed_passes.py:112\u001B[39m, in \u001B[36mBaseTypeInference.run_pass\u001B[39m\u001B[34m(self, state)\u001B[39m\n\u001B[32m    106\u001B[39m \u001B[38;5;250m\u001B[39m\u001B[33;03m\"\"\"\u001B[39;00m\n\u001B[32m    107\u001B[39m \u001B[33;03mType inference and legalization\u001B[39;00m\n\u001B[32m    108\u001B[39m \u001B[33;03m\"\"\"\u001B[39;00m\n\u001B[32m    109\u001B[39m \u001B[38;5;28;01mwith\u001B[39;00m fallback_context(state, \u001B[33m'\u001B[39m\u001B[33mFunction \u001B[39m\u001B[33m\"\u001B[39m\u001B[38;5;132;01m%s\u001B[39;00m\u001B[33m\"\u001B[39m\u001B[33m failed type inference\u001B[39m\u001B[33m'\u001B[39m\n\u001B[32m    110\u001B[39m                       % (state.func_id.func_name,)):\n\u001B[32m    111\u001B[39m     \u001B[38;5;66;03m# Type inference\u001B[39;00m\n\u001B[32m--> \u001B[39m\u001B[32m112\u001B[39m     typemap, return_type, calltypes, errs = type_inference_stage(\n\u001B[32m    113\u001B[39m         state.typingctx,\n\u001B[32m    114\u001B[39m         state.targetctx,\n\u001B[32m    115\u001B[39m         state.func_ir,\n\u001B[32m    116\u001B[39m         state.args,\n\u001B[32m    117\u001B[39m         state.return_type,\n\u001B[32m    118\u001B[39m         state.locals,\n\u001B[32m    119\u001B[39m         raise_errors=\u001B[38;5;28mself\u001B[39m._raise_errors)\n\u001B[32m    120\u001B[39m     state.typemap = typemap\n\u001B[32m    121\u001B[39m     \u001B[38;5;66;03m# save errors in case of partial typing\u001B[39;00m\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\typed_passes.py:93\u001B[39m, in \u001B[36mtype_inference_stage\u001B[39m\u001B[34m(typingctx, targetctx, interp, args, return_type, locals, raise_errors)\u001B[39m\n\u001B[32m     91\u001B[39m     infer.build_constraint()\n\u001B[32m     92\u001B[39m     \u001B[38;5;66;03m# return errors in case of partial typing\u001B[39;00m\n\u001B[32m---> \u001B[39m\u001B[32m93\u001B[39m     errs = infer.propagate(raise_errors=raise_errors)\n\u001B[32m     94\u001B[39m     typemap, restype, calltypes = infer.unify(raise_errors=raise_errors)\n\u001B[32m     96\u001B[39m \u001B[38;5;28;01mreturn\u001B[39;00m _TypingResults(typemap, restype, calltypes, errs)\n",
      "\u001B[36mFile \u001B[39m\u001B[32m~\\miniconda3\\envs\\bdp\\Lib\\site-packages\\numba\\core\\typeinfer.py:1074\u001B[39m, in \u001B[36mTypeInferer.propagate\u001B[39m\u001B[34m(self, raise_errors)\u001B[39m\n\u001B[32m   1071\u001B[39m force_lit_args = [e \u001B[38;5;28;01mfor\u001B[39;00m e \u001B[38;5;129;01min\u001B[39;00m errors\n\u001B[32m   1072\u001B[39m                   \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(e, ForceLiteralArg)]\n\u001B[32m   1073\u001B[39m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m force_lit_args:\n\u001B[32m-> \u001B[39m\u001B[32m1074\u001B[39m     \u001B[38;5;28;01mraise\u001B[39;00m errors[\u001B[32m0\u001B[39m]\n\u001B[32m   1075\u001B[39m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[32m   1076\u001B[39m     \u001B[38;5;28;01mraise\u001B[39;00m reduce(operator.or_, force_lit_args)\n",
      "\u001B[31mTypingError\u001B[39m: Failed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1m\u001B[1m\u001B[1mFailed in nopython mode pipeline (step: nopython frontend)\n\u001B[1m\u001B[1m\u001B[1mNo implementation of function Function(<built-in function iadd>) found for signature:\n \n >>> iadd(float32, array(float32, 2d, C))\n \nThere are 18 candidate implementations:\n\u001B[1m  - Of which 16 did not match due to:\n  Overload of function 'iadd': File: <numerous>: Line N/A.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match.\u001B[0m\n\u001B[1m  - Of which 2 did not match due to:\n  Operator Overload in function 'iadd': File: unknown: Line unknown.\n    With argument(s): '(float32, array(float32, 2d, C))':\u001B[0m\n\u001B[1m   No match for registered cases:\n    * (int64, int64) -> int64\n    * (int64, uint64) -> int64\n    * (uint64, int64) -> int64\n    * (uint64, uint64) -> uint64\n    * (float32, float32) -> float32\n    * (float64, float64) -> float64\n    * (complex64, complex64) -> complex64\n    * (complex128, complex128) -> complex128\u001B[0m\n\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of intrinsic-call at C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py (105)\u001B[0m\n\u001B[1m\nFile \"C:\\Users\\adadu\\miniconda3\\envs\\bdp\\Lib\\site-packages\\brainevent\\_csr_impl_float.py\", line 105:\u001B[0m\n\u001B[1m            def mv(weights, indices, indptr, vector, _, posts):\n                <source elided>\n                    for j in range(indptr[i], indptr[i + 1]):\n\u001B[1m                        posts[indices[j]] += weights[j] * sp\n\u001B[0m                        \u001B[1m^\u001B[0m\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 
0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  (8)\n\u001B[0m\n\u001B[0m\u001B[1mDuring: resolving callee type: type(CPUDispatcher(<function _csrmv_numba_kernel_generator.<locals>.mv at 0x0000027093F46FC0>))\u001B[0m\n\u001B[0m\u001B[1mDuring: typing of call at  (8)\n\u001B[0m\n\u001B[1m\nFile \"D:\\codes\\projects\\BrainPy\\docs_version2\\tutorial_math\", line 8:\u001B[0m\n\u001B[1m<source missing, REPL/exec in use?>\u001B[0m\n\n\u001B[0m\u001B[1mDuring: Pass nopython_type_inference\u001B[0m"
     ]
    }
   ],
   "execution_count": 7
  },
  {
   "cell_type": "markdown",
   "id": "98bb1e7a",
   "metadata": {},
   "source": [
    "Here, the non-zero connection weights, the postsynaptic indices, the presynaptic row pointers, the presynaptic activity, the connection shape (a tuple of the numbers of pre- and post-synaptic neurons), and a flag indicating whether to transpose the sparse matrix are passed to the function. It returns the same result, but with a higher speed if the connection is sparse (we will compare their running times shortly)."
   ]
  },
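  {
   "cell_type": "markdown",
   "id": "csr-matvec-sketch",
   "metadata": {},
   "source": [
    "To build intuition for what such a CSR operator computes, here is a minimal NumPy sketch (a conceptual illustration, not BrainPy's actual JIT-compiled kernel): `indptr[i]:indptr[i+1]` delimits the nonzeros of presynaptic row `i`, and each weight is scattered to its postsynaptic index.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def csr_matvec_transposed(data, indices, indptr, vector, n_post):\n",
    "    # Computes vector @ A for a CSR matrix A of shape (n_pre, n_post),\n",
    "    # where row i of A holds the weights of presynaptic neuron i.\n",
    "    out = np.zeros(n_post, dtype=data.dtype)\n",
    "    for i in range(len(vector)):                   # presynaptic neurons\n",
    "        for j in range(indptr[i], indptr[i + 1]):  # nonzeros in row i\n",
    "            out[indices[j]] += data[j] * vector[i]\n",
    "    return out\n",
    "```\n",
    "\n",
    "Only the stored nonzeros are visited, so the cost scales with the number of synapses rather than with `pre_num * post_num`."
   ]
  },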
  {
   "cell_type": "markdown",
   "id": "332a0b55",
   "metadata": {},
   "source": [
    "## Operators for event-driven synaptic computation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6a136da",
   "metadata": {},
   "source": [
    "Above, we discussed the situation where the synaptic connection is sparse. What if the presynaptic events are also sparse? Theoretically, even more time can be saved if we remove the redundant computation for inactive presynaptic neurons. In this case, we can use the event-driven operators in `brainpy.math.event`.\n",
    "\n",
    "Assume that in the above example, presynaptic neurons 0 and 1 are firing and neuron 2 is inactive (their activities are now represented by boolean values):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "f5ae76ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "event = bm.array([True, True, False])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc37c1a4",
   "metadata": {},
   "source": [
    "The multiplication of the event vector and the connection matrix now becomes:\n",
    "\n",
    "<img src=\"../_static/event_driven_matrix_multiplication.png\" width=\"450 px\" align=\"center\">"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f583570",
   "metadata": {},
   "source": [
    "Of course, we could use traditional matrix multiplication or the `brainpy.math.sparse.csrmv` operator for this computation. However, by simply changing `brainpy.math.sparse.csrmv` to `brainpy.math.event.csrmv`, we achieve event-driven synaptic computation, which is even faster:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "d08121b0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Array([4., 0., 6., 2., 9.], dtype=float32)"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bm.event.csrmv(data, indices=indices, indptr=indptr, events=event, \n",
    "               shape=conn_mat.shape, transpose=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "957caeb1",
   "metadata": {},
   "source": [
    "Note that the parameter `vector` in `brainpy.math.sparse.csrmv` is replaced by a boolean array `events` in `brainpy.math.event.csrmv`.\n",
    "\n",
    "Now let's compare the efficiency of traditional matrix operators, `brainpy.math.sparse.csrmv`, and `brainpy.math.event.csrmv`. To show a significant difference, a much larger network is used."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "c434d1f2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from timeit import default_timer\n",
    "\n",
    "pre_num, post_num = 15000, 10000  # network size\n",
    "\n",
    "event = bm.random.bernoulli(p=0.15, size=pre_num)  # 15% of presynaptic neurons are active"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64b11a41",
   "metadata": {},
   "source": [
    "The traditional matrix operator is tested first:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "072d19b7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent for weight matrix generation: 4.558527099999992 s\n"
     ]
    }
   ],
   "source": [
    "# generate the weight matrix\n",
    "start = default_timer()\n",
    "\n",
    "connection_weights = bm.random.uniform(size=(pre_num, post_num))\n",
    "connection_weights[connection_weights < 0.8] = 0.  # sparse connection: 20%\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent for weight matrix generation: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "d2ce5337",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent for synaptic computation: 0.03688780000001657 s\n"
     ]
    }
   ],
   "source": [
    "# traditional matrix operator\n",
    "start = default_timer()\n",
    "\n",
    "post = bm.matmul(event, connection_weights)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent for synaptic computation: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "724bfc23",
   "metadata": {},
   "source": [
    "**Note that to exclude the time for JIT compilation, the code was run twice and the second result is displayed**. Though the running time for `brainpy.math.matmul` is acceptable, a huge amount of time is spent on weight matrix generation, which is extremely inefficient.\n",
    "\n",
    "For a sparse matrix, it is more efficient to use the CSR format to store the data and compute the product:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "dfecf646",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent for connection generation: 1.61494689999995 s\n"
     ]
    }
   ],
   "source": [
    "# generate connection and store it in a CSR matrix\n",
    "start = default_timer()\n",
    "\n",
    "# define a connection with fixed connection probability by brainpy.conn.FixedProb\n",
    "connection = bp.conn.FixedProb(prob=0.2)\n",
    "# obtain these properties by .require('pre2post')\n",
    "indices, indptr = connection(pre_num, post_num).require('pre2post')\n",
    "data = bm.random.uniform(size=indices.shape)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent for connection generation: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "d11f8e79",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent for synaptic computation: 0.0551689000000124 s\n"
     ]
    }
   ],
   "source": [
    "# dedicated sparse operator\n",
    "pre_activity = event.astype(float)\n",
    "\n",
    "start = default_timer()\n",
    "\n",
    "post = bm.sparse.csrmv(data, indices=indices, indptr=indptr, vector=pre_activity, \n",
    "                       shape=(pre_num, post_num), transpose=True)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent for synaptic computation: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "457fa96c",
   "metadata": {},
   "source": [
    "Due to the large number of inactive presynaptic neurons, the sparse connection matrix is not the optimal solution until event-driven computation is involved:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "213c43ad",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent for synaptic computation: 0.015149300000018684 s\n"
     ]
    }
   ],
   "source": [
    "start = default_timer()\n",
    "\n",
    "post = bm.event.csrmv(data, indices=indices, indptr=indptr, events=event, \n",
    "                      shape=(pre_num, post_num), transpose=True)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent for synaptic computation: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae59f95e",
   "metadata": {},
   "source": [
    "Now the running time is significantly lower than the other two operators.\n",
    "\n",
    "To summarize, for synaptic computation with a sparse connection matrix, operators in `brainpy.math.sparse` are recommended. Furthermore, if the presynaptic activity is also sparse, operators in `brainpy.math.event` are better choices."
   ]
  },
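  {
   "cell_type": "markdown",
   "id": "event-csrmv-sketch",
   "metadata": {},
   "source": [
    "Conceptually, the event-driven operator differs from the sparse one only in that it skips the rows of silent presynaptic neurons entirely. A minimal NumPy sketch of this idea (again, a conceptual illustration, not BrainPy's actual kernel):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def event_csr_matvec(data, indices, indptr, events, n_post):\n",
    "    # Like a transposed CSR matvec, but the per-row work is done\n",
    "    # only for presynaptic neurons that emitted a spike.\n",
    "    out = np.zeros(n_post, dtype=data.dtype)\n",
    "    for i in range(len(events)):\n",
    "        if events[i]:  # skip inactive presynaptic neurons\n",
    "            for j in range(indptr[i], indptr[i + 1]):\n",
    "                out[indices[j]] += data[j]\n",
    "    return out\n",
    "```\n",
    "\n",
    "With only 15% of neurons active, roughly 85% of the inner loops are skipped, which is where the additional speedup comes from."
   ]
  },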
  {
   "cell_type": "markdown",
   "id": "240aee1f",
   "metadata": {},
   "source": [
    "## Even faster synaptic computation with specific connection patterns"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9bae4bc1",
   "metadata": {},
   "source": [
    "Synaptic computation, i.e., the multiplication of a vector and a sparse matrix, though impressively accelerated by the event-driven and sparse operators in BrainPy, can be improved even further in some situations.\n",
    "\n",
    "When building a brain model, we often encounter situations like this: \n",
    "1. each presynaptic neuron is randomly connected with postsynaptic neurons in a fixed probability, and \n",
    "2. the connection weights obey certain rules, such as being a constant value or generated from a normal distribution. \n",
    "\n",
    "One solution is to generate the random connection and the corresponding weights, store them in a sparse matrix, and then use BrainPy's dedicated operators for synaptic computation. To further improve the performance, BrainPy provides another solution: generating the connection information during synaptic computation, thus saving the space to store the connection and the time to interact with memory.\n",
    "\n",
    "Currently, BrainPy provides two types of such operators: one regular and one event-driven. They are contained in `brainpy.math.jitconn`, meaning that the random connection is generated just in time during computation. As an example, let's use the event-driven operator to compute synaptic transmission when (1) the connection matrix is randomly generated with a fixed probability and (2) the connection weights are drawn from a normal distribution."
   ]
  },
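  {
   "cell_type": "markdown",
   "id": "jitconn-sketch",
   "metadata": {},
   "source": [
    "The key trick behind `brainpy.math.jitconn` can be sketched as follows: because the connectivity and weights are fully determined by a random seed, they can be re-generated identically on every call instead of being stored. The NumPy sketch below illustrates the idea; the per-row seeding scheme is an illustrative assumption, not BrainPy's actual kernel:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def jitconn_event_mv(events, n_post, prob, w_mu, w_sigma, seed=0):\n",
    "    # Event-driven matvec whose connectivity and weights are\n",
    "    # re-created deterministically from (seed, row) on the fly,\n",
    "    # so no connection matrix is ever stored.\n",
    "    out = np.zeros(n_post)\n",
    "    for i, spike in enumerate(events):\n",
    "        if not spike:\n",
    "            continue\n",
    "        rng = np.random.default_rng(seed + i)   # deterministic per row\n",
    "        mask = rng.random(n_post) < prob        # sampled targets\n",
    "        weights = rng.normal(w_mu, w_sigma, n_post)\n",
    "        out += np.where(mask, weights, 0.0)\n",
    "    return out\n",
    "```\n",
    "\n",
    "Calling this function twice with the same seed yields identical results, which is what allows the connection to be reused across time steps without ever materializing it in memory."
   ]
  },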
  {
   "cell_type": "markdown",
   "id": "35b504e8",
   "metadata": {},
   "source": [
    "First, let's consider how this can be achieved without any dedicated operators. Below are the related parameters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "6461bca0",
   "metadata": {},
   "outputs": [],
   "source": [
    "pre_num, post_num = 50000, 10000  # network size\n",
    "conn_prob = 0.1  # connection probability\n",
    "w_mu, w_sigma = 0.5, 1.  # parameters of the normal distribution"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d019b22d",
   "metadata": {},
   "source": [
    "A vector representing the presynaptic activity is also needed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "92ecfc35",
   "metadata": {},
   "outputs": [],
   "source": [
    "event = bm.random.bernoulli(p=0.2, size=pre_num)  # 20% of presynaptic neurons release spikes"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b88136f1",
   "metadata": {},
   "source": [
    "To obtain random connections with a fixed probability, we generate a boolean connection matrix. To obtain normally distributed weights, we also generate a float-valued weight matrix. We can then compute the postsynaptic input by multiplying the event vector with the elementwise product of the connection and weight matrices:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "b9a017ad",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent: 11.050910799999997 s\n"
     ]
    }
   ],
   "source": [
    "start = default_timer()\n",
    "\n",
    "# generate the connection matrix using brainpy.connect.FixedProb\n",
    "conn = bp.connect.FixedProb(prob=conn_prob)(pre_num, post_num)\n",
    "conn_matrix = conn.require('conn_mat')\n",
    "\n",
    "# generate the weight matrix\n",
    "weight_matrix = bm.random.normal(loc=w_mu, scale=w_sigma, size=(pre_num, post_num))\n",
    "\n",
    "# compute the product\n",
    "post = bm.matmul(event, weight_matrix * conn_matrix)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf89a99c",
   "metadata": {},
   "source": [
    "The total running time is about 11 seconds. Now let's use `brainpy.math.event.csrmv` to see how much of an improvement it brings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "3bf35974",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent: 4.693945999999983 s\n"
     ]
    }
   ],
   "source": [
    "start = default_timer()\n",
    "\n",
    "# generate the connection with the pre2post structure\n",
    "conn = bp.connect.FixedProb(prob=conn_prob)(pre_num, post_num)\n",
    "indices, indptr = conn.require('pre2post')\n",
    "\n",
    "# generate the weight matrix\n",
    "weights = bm.random.normal(loc=w_mu, scale=w_sigma, size=indices.shape[0])\n",
    "\n",
    "# compute the event-driven synaptic summation\n",
    "post = bm.event.csrmv(weights, indices, indptr, event, shape=(pre_num, post_num), \n",
    "                      transpose=True)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d899e91",
   "metadata": {},
   "source": [
    "The running time is roughly halved, but it is still unsatisfactory. Next, we try a different operator, `brainpy.math.jitconn.event_mv_prob_normal`, which does not generate the connections and weights explicitly but produces them on the fly during the matrix multiplication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "1cac037b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time spent: 0.0062690999999404085 s\n"
     ]
    }
   ],
   "source": [
    "start = default_timer()\n",
    "\n",
    "# the connection is generated during computation, \n",
    "# so the entire connection information is not required to be stored or accessed\n",
    "post = bm.jitconn.event_mv_prob_normal(event, w_mu=w_mu, w_sigma=w_sigma, conn_prob=conn_prob, \n",
    "                                       shape=(pre_num, post_num), transpose=True)\n",
    "\n",
    "duration = default_timer() - start\n",
    "\n",
    "print('time spent: {} s'.format(duration))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47561648",
   "metadata": {},
   "source": [
    "Remarkably, the time spent is far less than with the previous operations! Therefore, users are recommended to use these operators to multiply a vector by a sparse matrix in the following situations:\n",
    "\n",
    "1. Use `brainpy.math.jitconn.mv_prob_homo()`, if the connection is of fixed probability, and the connection weight is a constant (a single value).\n",
    "2. Use `brainpy.math.jitconn.mv_prob_uniform()`, if the connection is of fixed probability, and the connection weight is sampled from a uniform distribution.\n",
    "3. Use `brainpy.math.jitconn.mv_prob_normal()`, if the connection is of fixed probability, and the connection weight is sampled from a normal distribution.\n",
    "\n",
    "Besides, there are three corresponding event-driven operators, `brainpy.math.jitconn.event_mv_prob_homo()`, `brainpy.math.jitconn.event_mv_prob_uniform()`, and `brainpy.math.jitconn.event_mv_prob_normal()`, for event-driven synaptic computation."
   ]
  },
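  {
   "cell_type": "markdown",
   "id": "9f2c1a7e",
   "metadata": {},
   "source": [
    "As a quick sketch of the non-event-driven case, `brainpy.math.jitconn.mv_prob_homo()` can be called on a float-valued activity vector. The call below assumes a signature mirroring `event_mv_prob_normal` above, with a single `weight` argument in place of the distribution parameters; please check the API reference for the exact keywords:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d8e4b2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# a float-valued presynaptic activity vector (not a binary spike-event vector)\n",
    "vector = bm.random.random(pre_num)\n",
    "\n",
    "# multiply the vector by a just-in-time generated connection matrix\n",
    "# with fixed connection probability and a constant weight of 0.5\n",
    "post = bm.jitconn.mv_prob_homo(vector, weight=0.5, conn_prob=conn_prob,\n",
    "                               shape=(pre_num, post_num), transpose=True)\n",
    "print(post.shape)"
   ]
  },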
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4be8db84",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
