{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Function Decorators for Accelerated Code\n",
    "\n",
    "The goal is to provide a simple API for end users to interact with custom IP in the fabric, and a simple mechanism for overlay writers to expose that functionality to end users. A decorator, `@hardware_function(vlnv)`, marks a function as potentially offloaded and handles all of the communication. The argument and return types are expressed using Python type annotations. If the VLNV appears in the loaded bitstream, calling the function returns a wrapper that, upon accessing the data, acts like a numpy array of the specified type. If the VLNV is not in the block design, the function is executed as normal."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Representation of call chains\n",
    "The first task is to provide wrappers for the call chains being offloaded. This is taken wholesale from the test notebook. For now, it is assumed that every function takes one or more streams as input and returns a single stream."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "class Wrapper:\n",
    "    def __init__(self, wrapped, dtype = np.int32):\n",
    "        self.wrapped = wrapped\n",
    "        self.dtype = dtype\n",
    "    def value(self):\n",
    "        return self.wrapped\n",
    "\n",
    "class Call:\n",
    "    def __init__(self, func, stream_args, scalar_args, return_type = np.uint32):\n",
    "        self.func = func\n",
    "        self.args = stream_args\n",
    "        self.scalar_args = scalar_args\n",
    "        self.dtype = return_type\n",
    "        self.cached = None\n",
    "\n",
    "    def value(self):\n",
    "        return self.func(*[a.value() for a in self.args])\n",
    "    \n",
    "    def hw_value(self):\n",
    "        return execute_hardware(self)\n",
    "    \n",
    "    def __str__(self):\n",
    "        if self.cached is None:\n",
    "            self.cached = self.hw_value()\n",
    "        return str(self.cached)\n",
    "    \n",
    "    def __getitem__(self, index):\n",
    "        if self.cached is None:\n",
    "            self.cached = self.hw_value()\n",
    "        return self.cached[index]\n",
    "    \n",
    "    def __len__(self):\n",
    "        if self.cached is None:\n",
    "            self.cached = self.hw_value()\n",
    "        return len(self.cached)"
   ]
  },
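  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a software-only illustration of the lazy call chain (no hardware involved; the `doubler` callable below is hypothetical and stands in for a VLNV), a plan can be built by hand and forced with `value()`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Software-only sketch: here func is a plain Python callable rather than\n",
    "# a VLNV string, so value() evaluates the chain entirely in software.\n",
    "def doubler(xs):\n",
    "    return [x * 2 for x in xs]\n",
    "\n",
    "plan = Call(doubler, [Wrapper([1, 2, 3])], [])\n",
    "print(plan.value())  # -> [2, 4, 6]"
   ]
  },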
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Determining what's in the bitstream\n",
    "In order to correctly wire up the switches in the bitstream, we need to extract from the Tcl file which IP blocks are in the diagram and how they are wired. This is future work, so for now it is hard-coded to the example bitstream; this will change after the proof-of-concept."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from collections import namedtuple\n",
    "\n",
    "Function = namedtuple('Function', 'in_ports out_ports name')\n",
    "\n",
    "class FunctionMetadata:\n",
    "    def __init__(self):\n",
    "        self.DMA = [([0],[0]),([5],[4])]\n",
    "        self.DMA_names = ['axi_dma_0', 'axi_dma_1']\n",
    "        self.functions = {}\n",
    "        #self.functions['Xilinx:hls:stream_double:1.0'] = Function(in_ports=[2],out_ports=[2],name=None)\n",
    "        self.functions['xilinx.com:user:design_1:1.0'] = Function(in_ports=[2],out_ports=[2],name=None)\n",
    "        self.functions['Xilinx:hls:stream_mult:1.0'] = Function(in_ports=[3,4],out_ports=[3],name=None)\n",
    "        #self.functions['xilinx.com:hls:wrapped_conv_hw:1.0'] = Function(in_ports=[3,4],out_ports=[3],name=None)\n",
    "        #self.functions['xilinx.com:hls:wrapped_conv_im2col_hw:1.0'] = Function(in_ports=[3,4],out_ports=[3],name=None)\n",
    "        #self.functions['xilinx.com:hls:wrapped:1.0'] = Function(in_ports=[3,4],out_ports=[3],name=None)\n",
    "        self.functions['Xilinx:hls:simple_sum:1.0'] = Function(in_ports=[1],out_ports=[1],name=None)\n",
    "        self.functions['Xilinx:hls:mult_constant:1.0'] = Function(in_ports=[6],out_ports=[5],name='mult_constant_0')\n",
    "        \n",
    "metadata = FunctionMetadata()"
   ]
  },
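  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The metadata table can be queried by VLNV to find which switch ports a core is attached to, for example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Look up the switch ports for one of the hard-coded cores above.\n",
    "f = metadata.functions['Xilinx:hls:stream_mult:1.0']\n",
    "print(f.in_ports, f.out_ports)  # -> [3, 4] [3]"
   ]
  },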
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Controlling the switch\n",
    "The next helper class controls the switch by setting routes. It is a thin wrapper around the control interface of the Xilinx AXI Stream Switch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from pynq import PL\n",
    "from pynq import MMIO\n",
    "\n",
    "class StreamingSwitch:\n",
    "    def __init__(self, name):\n",
    "        base_addr = int(PL.ip_dict[\"SEG_{0}_Reg\".format(name)][0],16)\n",
    "        self.mmio = MMIO(base_addr, 256)\n",
    "        self.reset()\n",
    "        \n",
    "    def set_route(self, in_port, out_port):\n",
    "        #print('SWITCH: setting route {0} to {1}'.format(in_port, out_port))\n",
    "        self.mmio.write(0x40 + out_port * 4, in_port)\n",
    "        \n",
    "    def reset(self):\n",
    "        for i in range(16):\n",
    "            # Disable the output on every port\n",
    "            self.mmio.write(0x40 + i * 4, 0x80000000)\n",
    "    \n",
    "    def commit(self):\n",
    "        # Causes the switch to update atomically to the new routing\n",
    "        self.mmio.write(0, 2)\n",
    "        "
   ]
  },
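  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a usage sketch (this assumes a loaded bitstream containing a switch named `axis_switch_0`), routing slave interface 0 to master interface 2 and committing the change looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Sketch only: requires a bitstream with an AXI4-Stream Switch instance.\n",
    "sw = StreamingSwitch('axis_switch_0')\n",
    "sw.set_route(0, 2)  # mux register at 0x40 + 2*4 selects slave port 0\n",
    "sw.commit()         # the control register write applies the new routing"
   ]
  },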
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Decorator\n",
    "Take a function and wrap it in a `Call` object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import inspect\n",
    "\n",
    "def wrap_arg(a, dtype=np.int32):\n",
    "    if type(a) is Call or type(a) is Wrapper:\n",
    "        return a\n",
    "    else:\n",
    "        # TODO: sort out element type\n",
    "        return Wrapper(a, dtype)\n",
    "\n",
    "def hardware_function(vlnv):\n",
    "    def decorator(func):\n",
    "        sig = inspect.signature(func)\n",
    "        ret_type = sig.return_annotation[0]\n",
    "        def wrapped_function(*args, **kwargs):\n",
    "            ba = sig.bind(*args, **kwargs)\n",
    "            if vlnv in metadata.functions:\n",
    "                stream_args = []\n",
    "                scalar_args = []\n",
    "                for param in sig.parameters.values():\n",
    "                    if type(param.annotation) is list:\n",
    "                        stream_args.append(wrap_arg(ba.arguments[param.name], param.annotation[0]))\n",
    "                    else:\n",
    "                        scalar_args.append(ba.arguments[param.name])\n",
    "                return Call(vlnv, stream_args, scalar_args, return_type=ret_type)\n",
    "            else:\n",
    "                # We don't have the function available so we might\n",
    "                # as well just call the function and return\n",
    "                return func(*args, **kwargs)\n",
    "        return wrapped_function\n",
    "    return decorator"
   ]
  },
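  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate how the decorator classifies arguments (a list annotation marks a stream; anything else is treated as a scalar destined for the core's control registers), the hypothetical `example` function below is inspected the same way the decorator does it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Hypothetical function: vs is a stream argument, scale is a scalar.\n",
    "def example(vs:[np.int32], scale:int) -> [np.int32]:\n",
    "    return [v * scale for v in vs]\n",
    "\n",
    "sig = inspect.signature(example)\n",
    "for p in sig.parameters.values():\n",
    "    kind = 'stream' if type(p.annotation) is list else 'scalar'\n",
    "    print(p.name, kind)  # -> vs stream, scale scalar"
   ]
  },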
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configuring the Switch and DMA\n",
    "The final step is to take a `Call` object and configure the switch accordingly. This process should also prime the DMA engines with the correct data to be sent. We still need a mechanism to set the correct size of the receiving buffer; thoughts welcome."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Horrible hack to load the DMA driver\n",
    "from pynq import Overlay\n",
    "Overlay('base.bit').download()\n",
    "from pynq.drivers import DMA\n",
    "import pynq.drivers.dma\n",
    "#Overlay('/home/xilinx/decorator_test.bit').download()\n",
    "#Overlay('/home/xilinx/decorator_conv.bit').download()\n",
    "Overlay('/home/xilinx/jupyter_notebooks/PYNQ_CNN/Theano/Lenet/Bitstream/decorator_lenet_full.bit').download()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Wrap the DMA\n",
    "Provide a simple API over the DMA. The buffer ought to be separated out from the DMA engine, as proposed elsewhere; then the DMA engine instances could be static and buffers could be returned without being copied."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class DMAWrapper:\n",
    "    def __init__(self,index):\n",
    "        #print('Send DMA: create index {0} name {1}'.format(index, metadata.DMA_names[index]))\n",
    "        base_addr = int(PL.ip_dict[\"SEG_{0}_Reg\".format(metadata.DMA_names[index])][0],16)\n",
    "        #print('Send DMA: base_address {0:x}'.format(base_addr))\n",
    "        self.dma = DMA(base_addr, 0)\n",
    "        self.ports = metadata.DMA[index]\n",
    "        \n",
    "    def set_data(self, data, dtype):\n",
    "        self.length = len(data) * dtype.itemsize\n",
    "        #print('Send DMA: sending {0} bytes'.format(self.length))\n",
    "        self.dma.create_buf(self.length)\n",
    "        ffi = pynq.drivers.dma.ffi\n",
    "        buf = ffi.buffer(self.dma.buf, self.length)\n",
    "        view = np.frombuffer(buf, dtype, -1)\n",
    "        np.copyto(view, data, casting='same_kind')\n",
    "\n",
    "    def transfer(self):\n",
    "        #print('Send DMA: transfer started')\n",
    "        self.dma.transfer(self.length, 0)\n",
    "    \n",
    "    def wait(self):\n",
    "        self.dma.wait()\n",
    "        #print('Send DMA: transfer finished')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Parse the execution plan\n",
    "Next, a recursive function walks the execution plan. At the moment, there is no protection against using a function multiple times in a plan; that will follow later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def prepare_execution(plan, dma, return_port):\n",
    "    if type(plan) is Wrapper:\n",
    "        d = DMAWrapper(len(dma))\n",
    "        d.set_data(plan.wrapped, plan.dtype())\n",
    "        dma.append(d)\n",
    "        hw_switch.set_route(d.ports[1][0], return_port)\n",
    "    elif type(plan) is Call:\n",
    "        in_ports = metadata.functions[plan.func].in_ports\n",
    "        out_ports = metadata.functions[plan.func].out_ports\n",
    "        name = metadata.functions[plan.func].name\n",
    "        mmio = None\n",
    "        if name:\n",
    "            mmio = MMIO(int(PL.ip_dict['SEG_{0}_Reg'.format(name)][0],16),256)\n",
    "        for i, a in enumerate(plan.args):\n",
    "            prepare_execution(a, dma, in_ports[i])\n",
    "        for i, a in enumerate(plan.scalar_args):\n",
    "            mmio.write(0x10 + 4*i, a)\n",
    "        hw_switch.set_route(out_ports[0], return_port)\n",
    "    else:\n",
    "        print(\"Unknown plan type: \" + repr(plan))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Execute the plan\n",
    "This is the main function that executes the plan. It first calls the parsing function, then configures the input DMA engines with suitable buffers, and then waits for the return DMA to complete. Because the return buffer belongs to the DMA engine, a copy has to be taken. This can be changed with a modified DMA API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "hw_switch = StreamingSwitch('axis_switch_0')\n",
    "\n",
    "def execute_hardware(plan):\n",
    "    dma = []\n",
    "    hw_switch.reset()\n",
    "    ret_dma_base = int(PL.ip_dict[\"SEG_{0}_Reg\".format(metadata.DMA_names[0])][0],16)\n",
    "    ret_dma_mmio = MMIO(ret_dma_base, 256)\n",
    "    ret_dma = DMA(ret_dma_base, 1)\n",
    "    # TODO: Metadata for how big the buffer should be?\n",
    "    ret_dma.create_buf(8388607)\n",
    "    prepare_execution(plan, dma, metadata.DMA[0][0][0])\n",
    "    hw_switch.commit()\n",
    "    ret_dma.transfer(8388607, 1)\n",
    "    for d in dma:\n",
    "        d.transfer()\n",
    "    for d in dma:\n",
    "        d.wait()\n",
    "    ret_dma.wait()\n",
    "    bytes_read = ret_dma_mmio.read(0x58)\n",
    "    #print(bytes_read)\n",
    "    ffi = pynq.drivers.dma.ffi\n",
    "    buf = ffi.buffer(ret_dma.buf, bytes_read)\n",
    "    view = np.frombuffer(buf, plan.dtype, -1).copy()\n",
    "    return view"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Testing the Decorator\n",
    "Create some simple functions that map to the hardware functions and check that the decorator maps them accordingly. We'll add print statements to the Python versions of the functions so we can make sure they're not called."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "@hardware_function('Xilinx:hls:simple_sum:1.0')\n",
    "def total(vs:[np.int32]) -> [np.int32]:\n",
    "    print(\"In total\")\n",
    "    return sum(vs)\n",
    "\n",
    "@hardware_function('xilinx.com:user:design_1:1.0')\n",
    "#@hardware_function('Xilinx:hls:stream_double:1.0')\n",
    "def double(vs:[np.int32]) -> [np.int32]:\n",
    "    print(\"In double\")\n",
    "    return [v * 2 for v in vs]\n",
    "\n",
    "@hardware_function('Xilinx:hls:stream_mult:1.0')\n",
    "#@hardware_function('xilinx.com:hls:wrapped_conv_hw:1.0')\n",
    "#@hardware_function('xilinx.com:hls:wrapped_conv_im2col_hw:1.0')\n",
    "#@hardware_function('xilinx.com:hls:wrapped:1.0')\n",
    "def mult(a:[np.int32], b:[np.int32]) -> [np.int32]:\n",
    "    return [a1 * b1 for (a1,b1) in zip(a,b)]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we chain two hardware functions together. Note that no computation happens at this point, as we don't know whether the user wants this value or plans to use it as an intermediate value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "ename": "TimeoutError",
     "evalue": "DMA wait timed out.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTimeoutError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-11-8d6e54d90b74>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     38\u001b[0m \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mKerDim_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mIFMCH_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mIFMDim_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mOFMCH_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mOFMDim_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mPadDim_1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m   \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m 
\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m   \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m   \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     39\u001b[0m \u001b[0mker_param_1\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdouble\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkernel_1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 40\u001b[0;31m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mker_param_1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     41\u001b[0m \u001b[0;31m#ker_param_2 = 
double(kernel_2)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     42\u001b[0m \u001b[0;31m#print(ker_param_2)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m<ipython-input-1-bf7c1ff480c4>\u001b[0m in \u001b[0;36m__str__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     24\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m__str__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     25\u001b[0m         \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcached\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 26\u001b[0;31m             \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcached\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhw_value\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     27\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcached\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     28\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m<ipython-input-1-bf7c1ff480c4>\u001b[0m in \u001b[0;36mhw_value\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     20\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     21\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mhw_value\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 22\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mexecute_hardware\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     23\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     24\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m__str__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m<ipython-input-8-1c469e1c9cda>\u001b[0m in \u001b[0;36mexecute_hardware\u001b[0;34m(plan)\u001b[0m\n\u001b[1;32m     16\u001b[0m     \u001b[0;32mfor\u001b[0m \u001b[0md\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mdma\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     17\u001b[0m         \u001b[0md\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwait\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 18\u001b[0;31m     \u001b[0mret_dma\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwait\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     19\u001b[0m     \u001b[0mbytes_read\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mret_dma_mmio\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m0x58\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     20\u001b[0m     \u001b[0;31m#print(bytes_read)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/usr/local/lib/python3.4/dist-packages/pynq/drivers/dma.py\u001b[0m in \u001b[0;36mwait\u001b[0;34m(self, wait_timeout)\u001b[0m\n\u001b[1;32m    439\u001b[0m         \u001b[0;32mwith\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mseconds\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mwait_timeout\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0merror_message\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mError\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    440\u001b[0m             \u001b[0;32mwhile\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 441\u001b[0;31m                 \u001b[0;32mif\u001b[0m \u001b[0mlibdma\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mXAxiDma_Busy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mDMAengine\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdirection\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    442\u001b[0m                     \u001b[0;32mbreak\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    443\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/usr/local/lib/python3.4/dist-packages/pynq/drivers/dma.py\u001b[0m in \u001b[0;36mhandle_timeout\u001b[0;34m(self, signum, frame)\u001b[0m\n\u001b[1;32m    173\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    174\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mhandle_timeout\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msignum\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mframe\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 175\u001b[0;31m         \u001b[0;32mraise\u001b[0m \u001b[0mTimeoutError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0merror_message\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    176\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    177\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m__enter__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mTimeoutError\u001b[0m: DMA wait timed out."
     ]
    }
   ],
   "source": [
    "## Test 1: One Channel, One Output Channel, No Padding\n",
    "\n",
    "batch_size = 1\n",
    "KerDim_1 = 3\n",
    "IFMCH_1 = 1\n",
    "IFMDim_1 = 8\n",
    "OFMCH_1 = 1\n",
    "OFMDim_1 = 8\n",
    "PadDim_1 = 1\n",
    "\n",
    "KerDim_2 = 3\n",
    "IFMCH_2 = 1\n",
    "IFMDim_2 = 4\n",
    "OFMCH_2 = 1\n",
    "OFMDim_2 = 4\n",
    "PadDim_2 = 1\n",
    "\n",
    "KerDim_3 = 2\n",
    "IFMCH_3 = 1\n",
    "IFMDim_3 = 2\n",
    "OFMCH_3 = 1\n",
    "OFMDim_3 = 1\n",
    "PadDim_3 = 0\n",
    "\n",
    "KerDim_4 = 1\n",
    "IFMCH_4 = 1\n",
    "IFMDim_4 = 1\n",
    "OFMCH_4 = 1\n",
    "OFMDim_4 = 1\n",
    "PadDim_4 = 0\n",
    "\n",
    "\n",
    "\n",
    "kernel_1 = [1, batch_size, KerDim_1, IFMCH_1, IFMDim_1, OFMCH_1, OFMDim_1, PadDim_1, 0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
    "kernel_2 = [2, batch_size, KerDim_2, IFMCH_2, IFMDim_2, OFMCH_2, OFMDim_2, PadDim_2, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
    "kernel_3 = [3, batch_size, KerDim_3, IFMCH_3, IFMDim_3, OFMCH_3, OFMDim_3, PadDim_3, 1, 1, 1, 0]\n",
    "kernel_4 = [4, batch_size, KerDim_4, IFMCH_4, IFMDim_4, OFMCH_4, OFMDim_4, PadDim_4, 1]\n",
    "input = [0, batch_size, KerDim_1, IFMCH_1, IFMDim_1, OFMCH_1, OFMDim_1, PadDim_1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,   1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,   1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,   1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0]\n",
    "ker_param_1 = double(kernel_1)\n",
    "print(ker_param_1)\n",
    "#ker_param_2 = double(kernel_2)\n",
    "#print(ker_param_2)\n",
    "#output = double(input)\n",
    "#print(output)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 1  2  3  1  4  1  4  1 -1 -1 -1 -1 -1 -1 -1 -1  0]\n",
      "[0 2 3 1 4 1 4 1 2 3 4 4 2 5 7 5 1 4 6 4 2 4 4 2 2 3 4 4 2 5 7 5 1 4 6 4 2\n",
      " 4 4 2]\n"
     ]
    }
   ],
   "source": [
    "## Test 2: One Input Channel, One Output Channel, With Padding\n",
    "\n",
    "batch_size = 2\n",
    "KerDim_1 = 2\n",
    "IFMCH_1 = 1\n",
    "IFMDim_1 = 4\n",
    "OFMCH_1 = 1\n",
    "OFMDim_1 = 3\n",
    "PadDim_1 = 0\n",
    "\n",
    "KerDim_2 = 2\n",
    "IFMCH_2 = 1\n",
    "IFMDim_2 = 3\n",
    "OFMCH_2 = 1\n",
    "OFMDim_2 = 2\n",
    "PadDim_2 = 0\n",
    "\n",
    "kernel_1 = [1, batch_size, KerDim_1, IFMCH_1, IFMDim_1, OFMCH_1, OFMDim_1, PadDim_1, 1, 1, 0, 1]\n",
    "kernel_2 = [2, batch_size, KerDim_2, IFMCH_2, IFMDim_2, OFMCH_2, OFMDim_2, PadDim_2, 1, 1, 0, 1]\n",
    "input = [0, batch_size, KerDim_1, IFMCH_1, IFMDim_1, OFMCH_1, OFMDim_1, PadDim_1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,   1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0]\n",
    "ker_param_1 = double(kernel_1)\n",
    "print(ker_param_1)\n",
    "ker_param_2 = double(kernel_2)\n",
    "print(ker_param_2)\n",
    "output = double(input)\n",
    "print(output)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 1  1  3  2  4  1  4  1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1\n",
      " -1]\n",
      "[  0   1   3   2   4   1   4   1  -8 -12 -12  -8 -12 -18 -18 -12 -12 -18\n",
      " -18 -12  -8 -12 -12  -8]\n"
     ]
    }
   ],
   "source": [
    "## Test 3: Two Input Channels, One Output Channel, With Padding\n",
    "\n",
    "batch_size = 1\n",
    "KerDim = 3\n",
    "IFMCH = 2\n",
    "IFMDim = 4\n",
    "OFMCH = 1\n",
    "OFMDim = 4\n",
    "PadDim = 1\n",
    "\n",
    "kernel = [1, batch_size, KerDim, IFMCH, IFMDim, OFMCH, OFMDim, PadDim, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]\n",
    "input = [0, batch_size, KerDim, IFMCH, IFMDim, OFMCH, OFMDim, PadDim, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
    "ker_param = double(kernel)\n",
    "print(ker_param)\n",
    "output = double(input)\n",
    "print(output)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1 0 1 ..., 0 1 0]\n",
      "[1 0 1 ..., 0 1 0]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 82,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "KerDim = 5\n",
    "IFMCH = 3\n",
    "IFMDim = 32\n",
    "OFMCH = 192\n",
    "OFMDim = 32\n",
    "PadDim = 2\n",
    "\n",
    "# random generate input and kernel\n",
    "input_mat = np.random.randint(2, size=(IFMCH,IFMDim,IFMDim))\n",
    "kernel_mat = np.random.randint(2, size=(OFMCH,IFMCH,KerDim,KerDim))\n",
    "\n",
    "# from input generate input stream\n",
    "#for k in range(IFMDim):\n",
    "#    for j in range(IFMDim):\n",
    "#        for i in range(IFMCH):\n",
    "#            input_val2[k*IFMDim*IFMCH + j*IFMCH + i] = input_mat[i][j][k]\n",
    "            \n",
    "input_mat = input_mat.transpose(2,1,0)\n",
    "input_val = input_mat.ravel()\n",
    "\n",
    "# from kernel generate kernel stream\n",
    "#for n in range(OFMCH):\n",
    "#    for m in range(KerDim):\n",
    "#        for j in range(KerDim):\n",
    "#            for i in range(IFMCH):\n",
    "#                kernel_val2[n*IFMCH*KerDim*KerDim + m*KerDim*IFMCH + j*IFMCH + i] = kernel_mat[n][i][j][m]\n",
    "            \n",
    "kernel_mat = kernel_mat.transpose(0,3,2,1)\n",
    "kernel_val = kernel_mat.ravel()\n",
    "\n",
    "\n",
    "print(kernel_val)\n",
    "# NOTE: kernel_val2 is only produced by the commented-out reference loop\n",
    "# above; uncomment it (and preallocate kernel_val2) before running this check.\n",
    "print(kernel_val2)\n",
    "\n",
    "np.array_equal(kernel_val, kernel_val2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "## Image Test 1: 1st layer of NIN\n",
    "\n",
    "batch_size = 40\n",
    "KerDim = 5\n",
    "IFMCH = 96\n",
    "IFMDim = 16\n",
    "OFMCH = 192\n",
    "OFMDim = 16\n",
    "PadDim = 2\n",
    "\n",
    "# random generate input and kernel\n",
    "input_mat = np.random.randint(2, size=(batch_size, IFMCH,IFMDim,IFMDim)) - 1\n",
    "kernel_mat = np.random.randint(2, size=(OFMCH,IFMCH,KerDim,KerDim)) - 1\n",
    "\n",
    "#input_mat = np.zeros((IFMCH,IFMDim,IFMDim)) - 1\n",
    "#input_mat = input_mat.astype(int)\n",
    "#kernel_mat = np.random.randint(2, size=(OFMCH,IFMCH,KerDim,KerDim))\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 1 40  5 ...,  0 -1 -1]\n",
      "CPU times: user 4.52 s, sys: 3.07 s, total: 7.59 s\n",
      "Wall time: 3.8 s\n",
      "output_fpga [233 225 219 ..., 221 222 231]\n"
     ]
    }
   ],
   "source": [
    "## from input and kernel generate data stream\n",
    "\n",
    "#input_mat = input_mat.transpose(2,1,0)\n",
    "input_val = input_mat.transpose(0,2,3,1).ravel()\n",
    "#kernel_mat = kernel_mat.transpose(0,3,2,1)\n",
    "kernel_val = kernel_mat.transpose(0,2,3,1).ravel()\n",
    "\n",
    "input = np.append([0, batch_size, KerDim, IFMCH, IFMDim, OFMCH, OFMDim, PadDim], input_val)\n",
    "kernel = np.append([1, batch_size, KerDim, IFMCH, IFMDim, OFMCH, OFMDim, PadDim], kernel_val)\n",
    "\n",
    "ker_param = double(kernel)\n",
    "print(ker_param)\n",
    "#print('input', input_mat)\n",
    "#print('kernel', kernel_mat)\n",
    "\n",
    "# time\n",
    "%time output_fpga = double(input).hw_value()\n",
    "print('output_fpga', output_fpga[8:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 4.25 s, sys: 3.22 s, total: 7.47 s\n",
      "Wall time: 3.89 s\n",
      "CPU times: user 20 s, sys: 50 ms, total: 20 s\n",
      "Wall time: 10.4 s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "## CPU performance\n",
    "\n",
    "import im2col_lasagne_cython\n",
    "%time im2col_mat = im2col_lasagne_cython.im2col_lasagne_cython(input_mat.astype(float),KerDim,KerDim,PadDim,1)\n",
    "kernel_mat_2D = kernel_mat.reshape(OFMCH, -1)\n",
    "\n",
    "%time output_cpu = np.matmul(kernel_mat_2D, im2col_mat)\n",
    "output_cpu = output_cpu.reshape(OFMCH, OFMDim, OFMDim, batch_size)\n",
    "output_cpu = output_cpu.transpose(3, 0, 1, 2)\n",
    "\n",
    "output_fpga_mat = output_fpga[8:].reshape(batch_size, OFMDim*OFMDim, OFMCH) #(OFMDim*OFMDim, OFMCH)\n",
    "output_fpga_mat = output_fpga_mat.transpose(0, 2, 1)\n",
    "output_fpga_mat = output_fpga_mat.reshape(batch_size, OFMCH, OFMDim, OFMDim)\n",
    "\n",
    "np.array_equal(output_fpga_mat, output_cpu)"
   ]
  },
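  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The im2col approach used above can be illustrated in a self-contained way. The sketch below is a hypothetical pure-numpy re-implementation (not the `im2col_lasagne_cython` module), checking on tiny made-up sizes that the im2col-plus-matmul result matches a naive direct convolution:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def conv2d_direct(x, w, pad):\n",
    "    # naive stride-1 convolution (cross-correlation) for one image\n",
    "    ifmch, dim, _ = x.shape\n",
    "    ofmch, _, k, _ = w.shape\n",
    "    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode='constant')\n",
    "    odim = dim + 2 * pad - k + 1\n",
    "    out = np.zeros((ofmch, odim, odim))\n",
    "    for o in range(ofmch):\n",
    "        for r in range(odim):\n",
    "            for c in range(odim):\n",
    "                out[o, r, c] = np.sum(xp[:, r:r+k, c:c+k] * w[o])\n",
    "    return out\n",
    "\n",
    "def conv2d_im2col(x, w, pad):\n",
    "    # same result via im2col followed by one matrix multiply\n",
    "    ifmch, dim, _ = x.shape\n",
    "    ofmch, _, k, _ = w.shape\n",
    "    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode='constant')\n",
    "    odim = dim + 2 * pad - k + 1\n",
    "    cols = np.empty((ifmch * k * k, odim * odim))\n",
    "    for r in range(odim):\n",
    "        for c in range(odim):\n",
    "            cols[:, r * odim + c] = xp[:, r:r+k, c:c+k].ravel()\n",
    "    return (w.reshape(ofmch, -1) @ cols).reshape(ofmch, odim, odim)\n",
    "\n",
    "# tiny hypothetical case, not the layer sizes above\n",
    "x = np.random.randint(-1, 2, size=(2, 6, 6)).astype(float)\n",
    "w = np.random.randint(-1, 2, size=(3, 2, 3, 3)).astype(float)\n",
    "assert np.allclose(conv2d_direct(x, w, 1), conv2d_im2col(x, w, 1))"
   ]
  },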
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By calling print, we trigger the execution and the value is return"
   ]
  },
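  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of this deferred-execution pattern (simplified from the `Call` class defined earlier, with plain scalar arguments instead of streams): constructing the object records the call but runs nothing; only requesting the value executes it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class LazyCall:\n",
    "    # building the object records the call but runs nothing\n",
    "    def __init__(self, func, *args):\n",
    "        self.func, self.args = func, args\n",
    "    def value(self):\n",
    "        return self.func(*self.args)\n",
    "\n",
    "log = []\n",
    "def add(a, b):\n",
    "    log.append('ran')\n",
    "    return a + b\n",
    "\n",
    "c = LazyCall(add, 1, 2)\n",
    "assert log == []         # nothing has executed yet\n",
    "print(c.value())         # requesting the value triggers execution\n",
    "assert log == ['ran']"
   ]
  },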
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 50 ms, sys: 10 ms, total: 60 ms\n",
      "Wall time: 34.1 ms\n",
      "CPU times: user 80 ms, sys: 90 ms, total: 170 ms\n",
      "Wall time: 88.7 ms\n",
      "(65536,)\n"
     ]
    }
   ],
   "source": [
    "A_COL = 1024\n",
    "A_ROW = 200\n",
    "B_COL = 64\n",
    "B_ROW = A_ROW\n",
    "\n",
    "A = np.ones((A_COL,A_ROW))\n",
    "B = np.ones((B_ROW,B_COL))\n",
    "%time np.matmul(A,B)\n",
    "\n",
    "A = np.append([A_COL,A_ROW],np.ones((1,A_COL*A_ROW)))\n",
    "B = np.append([B_COL,B_ROW],np.ones((1,B_COL*B_ROW)))\n",
    "%time mult(A, B).hw_value()\n",
    "print(np.shape(mult(A, B).hw_value()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Send DMA: create index 0 name axi_dma_0\n",
      "Send DMA: base_address 40400000\n",
      "Send DMA: sending 307208 bytes\n",
      "SWITCH: setting route 0 to 3\n",
      "Send DMA: create index 1 name axi_dma_1\n",
      "Send DMA: base_address 40410000\n",
      "Send DMA: sending 9608 bytes\n",
      "SWITCH: setting route 4 to 4\n",
      "SWITCH: setting route 3 to 1\n",
      "SWITCH: setting route 1 to 0\n",
      "Send DMA: transfer started\n",
      "Send DMA: transfer started\n",
      "Send DMA: transfer finished\n",
      "Send DMA: transfer finished\n",
      "4\n",
      "[0]\n"
     ]
    }
   ],
   "source": [
    "#print(t)\n",
    "#tmp = t.hw_value() + 3\n",
    "print(total(mult(A,B)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because we never stored the intermediate value, if the user later requests it, we would need to redo the computation"
   ]
  },
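  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to avoid the recomputation, sketched here rather than implemented in the current wrapper, is to cache the result the first time it is produced, as the `cached` field on `Call` anticipates:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class CachedCall:\n",
    "    def __init__(self, func, *args):\n",
    "        self.func, self.args = func, args\n",
    "        self.cached = None\n",
    "    def value(self):\n",
    "        if self.cached is None:   # first request: run and remember\n",
    "            self.cached = self.func(*self.args)\n",
    "        return self.cached        # later requests reuse the result\n",
    "\n",
    "calls = []\n",
    "def expensive():\n",
    "    calls.append(1)\n",
    "    return 42\n",
    "\n",
    "c = CachedCall(expensive)\n",
    "assert c.value() == 42\n",
    "assert c.value() == 42\n",
    "assert len(calls) == 1    # the computation ran only once"
   ]
  },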
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Send DMA: create index 0 name axi_dma_0\n",
      "Send DMA: base_address 40400000\n",
      "Send DMA: sending 152 bytes\n",
      "SWITCH: setting route 0 to 3\n",
      "Send DMA: create index 1 name axi_dma_1\n",
      "Send DMA: base_address 40410000\n",
      "Send DMA: sending 108 bytes\n",
      "SWITCH: setting route 4 to 4\n",
      "SWITCH: setting route 3 to 0\n",
      "Send DMA: transfer started\n",
      "Send DMA: transfer started\n",
      "Send DMA: transfer finished\n",
      "Send DMA: transfer finished\n",
      "[ 9 12 15 15 12  9 12 16 20 20 16 12 15 20 26 26 21 16 15 20 26 26 21 16 12\n",
      " 16 21 21 17 13  9 12 16 16 13 10]\n"
     ]
    }
   ],
   "source": [
    "print(inter)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our hardware also contains a block that multiplies by a constant. The constant is passed in using the AXI-lite interface."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Send DMA: create index 0 name axi_dma_0\n",
      "Send DMA: base_address 40400000\n",
      "Send DMA: sending 28 bytes\n",
      "SWITCH: setting route 0 to 6\n",
      "SWITCH: setting route 5 to 0\n",
      "Send DMA: transfer started\n",
      "Send DMA: transfer finished\n",
      "28\n",
      "[ 5 10 15 20 25 30 35]\n"
     ]
    }
   ],
   "source": [
    "@hardware_function('Xilinx:hls:mult_constant:1.0')\n",
    "def constant_multiply(in_data:[np.int32], constant:np.int32) -> [np.int32]:\n",
    "    return [v * constant for v in in_data]\n",
    "\n",
    "print(constant_multiply([1,2,3,4,5,6,7], 5))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As `constant_multiple` is a python function like any other, we can also do function-y things to it. For example, we can use the `functools` library to partially apply the constant, giving us a new implementation of `double` in terms of `constant_multiply`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "ename": "NameError",
     "evalue": "name 'vals' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-14-b80cb0d1b9bf>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      2\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      3\u001b[0m \u001b[0mnew_double\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mfunctools\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpartial\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mconstant_multiply\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconstant\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnew_double\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmult\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mvals\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mvals2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      5\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mNameError\u001b[0m: name 'vals' is not defined"
     ]
    }
   ],
   "source": [
    "import functools\n",
    "\n",
    "new_double = functools.partial(constant_multiply, constant=2)\n",
    "print(new_double(mult(vals,vals2)))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Open Problems\n",
    "* Allocation of receive buffer\n",
    "* Data bigger than buffer size - SG may be able to help here\n",
    "* 0-length arrays - AXI4-Stream has no concept of a 0-length stream. Maybe a word with no strb bits?\n",
    "* Current wrapper logic is patchy at best but completely proxying a python object is non-trivial\n",
    "\n",
    "## Possible features\n",
    "* Plan partitioning for plans with more Calls than execution units/DMA engines\n",
    "* Re-use of intermediate values\n",
    "* I/O functions which configure the switch to route I/O directly\n",
    "* AXI-Master HLS support\n",
    "\n",
    "## Performance considerations\n",
    "* Need a way for users to CMA alloc a numpy array\n",
    "* Buffers not bound to DMA so that any CMA allocated buffer can be passed"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.4.3+"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
