\documentclass[pageno]{jpaper}

%replace XXX with the submission number you are given from the ASPLOS submission site.
\newcommand{\asplossubmissionnumber}{41}

\usepackage{amsmath}
\usepackage{varwidth}
\usepackage[normalem]{ulem}
\usepackage{graphicx}

\begin{document}

\title{
Wenquxing 22A: A Highly Efficient Neuromorphic Accelerator by RISC-V Customized Instruction Extension for Spiking Neural Network (RV-SNN 1.0), Streamlined LIF Model and Binary Stochastic STDP
}

\date{}
\maketitle

\thispagestyle{empty}

\begin{abstract}

In recent decades, the Spiking Neural Network (SNN) has become an attractive choice for hardware implementations of AI because of its energy efficiency. Recent works focus on accelerating SNN computing. However, most accelerator solutions are based on a CPU-accelerator architecture, which is energy-inefficient due to the complex control flow of this structure. 

This paper proposes Wenquxing 22A, a neuromorphic processor that efficiently computes SNNs with RISC-V extension instructions. The main idea of Wenquxing 22A is to integrate the SNN computing unit into the pipeline of a generic processor to achieve neuromorphic computing with the customized RISC-V SNN instruction extensions 1.0 (RV-SNN 1.0). The design uses an in-order RISC-V processor as its baseline, which is extended with an SNN computing unit in the execution stage. To integrate the Leaky Integrate-and-Fire (LIF) model into the in-order processor, we prune the complex traditional LIF model into what we call the streamlined LIF model and apply it to the pipeline. Besides that, binary stochastic Spike-Timing-Dependent Plasticity (STDP) with binary synaptic weights is also proposed to achieve the power efficiency of Wenquxing 22A. Recognition of the MNIST dataset is performed on Wenquxing 22A for comparison with other SNN systems. 

The experimental results show that, running at 300 MHz with a pure 1-bit 2-layer SNN, the effective peak power efficiency reaches 2.4 TSOPS/W (Tera Synaptic Operations per Second per Watt) and the area efficiency of Wenquxing 22A reaches 339.9 SOP/LUT (Synaptic Operations per LUT), both of which significantly exceed the published data from existing research on SNN accelerators. At the same time, with binary synaptic weights, Wenquxing 22A reaches a peak classification accuracy of 95.75\%, slightly exceeding the published data from existing research on SNN accelerators.

In addition, we have open-sourced Wenquxing 22A\footnote{GitHub: https://github.com/openmantianxing/Wenquxing22A}\footnote{Gitee: https://gitee.com/openmantianxing/wenquxing22a}.

\end{abstract}

\section{Introduction}
With the development of bio-inspired computing, researchers are seeking better ways to imitate brain behaviors observed in neuroscience. The very first step is the Artificial Neural Network (ANN), an important concept in AI \cite{narayanan2020spinalflow}. Its applications range from computer vision to sensory data processing. A large share of AI computing in these fields is realized on CPUs or GPUs, where the hardware processes complex information that is difficult for human beings to process \cite{lee2021neuroengine}. While researchers pursue AI's computing performance, a neural network's low-power operation is also important. Thus, the Spiking Neural Network (SNN), classified as the third generation of neural network models, becomes an attractive choice. The SNN has more bio-inspired features to imitate the low-power computing of brains, and it is computationally more powerful than other neural network models \cite{ghosh2009spiking}\cite{lee2018flexon}\cite{maass1997networks}\cite{goodman2008brian}. SNNs use an event-based model to imitate biological neurons with minimal energy. Recent works in SNN architecture design \cite{pedroni2016forward}\cite{jokar2016digital}\cite{nouri2017digital}\cite{lammie2018unsupervised}\cite{suri2013bio} show the potential of matching the accuracy of non-spiking ANNs. 

Traditional convolutional neurons require many operations, e.g., multiplying every input value by a weight and accumulating the results. Extensive work has demonstrated that these architectures can be implemented efficiently on FPGAs \cite{jameil2022efficient}. Beyond that, event-based SNNs avoid many unnecessary calculations by computing only on received events; the input is assumed to consist of sparse, dynamic external events. This sparsity provides time-based or order-based input data and avoids time-consuming memory accesses. Related works such as IBM TrueNorth, Intel Loihi, and Frenkel's ODIN have shown that processing event-based data with SNNs can be efficient in both training and inference. However, these neuromorphic processors still consume considerable power.

In this work, a 9-stage in-order RISC-V processor, NutShell, designed by the OSCPU (Open Source Chip Project by University) team at the University of Chinese Academy of Sciences \cite{nutshell}, is chosen as the baseline of our processor, and the RISC-V SNN instruction extensions are implemented in its pipeline. NutShell features the RV64IMACSU instruction set and is capable of running Debian.

\begin{figure*}[!htbp]
	\centering
	\includegraphics[width=6in]{fig/workflw.pdf}
	\caption{The Workflow of Updating Neurons at the Same Level of the SNN in Wenquxing 22A.}\label{fig-workflw}
\end{figure*}

The customized RISC-V-based SNN instruction set has high computational granularity to prevent the pipeline from being stalled for a long time due to the execution of one instruction. The streamlined Leaky Integrate-and-Fire (LIF) model and the order-based Binary Stochastic STDP (BS-STDP) are also utilized to compute SNNs. The experiments demonstrate that the neuron and synaptic models are both hardware-friendly.

Here are our contributions to achieving in-pipeline SNN computing:
\begin{itemize}
\item We design a customized SNN extension instruction set with high computational granularity based on the RISC-V ISA;
\item The streamlined LIF neuron unit is proposed to integrate the pruned LIF model into the pipeline;
\item A synapse updating unit for Binary Stochastic STDP is designed to support the iterative update of binary synaptic weights;
\item The source code of Wenquxing 22A is released for further improvement and future work.
\end{itemize}

  In addition, to evaluate Wenquxing 22A, our design is implemented on the Alveo U250 FPGA platform to classify the MNIST dataset using RV-SNN instructions and to evaluate the power consumption and recognition accuracy of the SNN on Wenquxing 22A. Another low-power spiking accelerator, ODIN \cite{frenkel20180}, is implemented on the same FPGA platform as a reference for the energy comparison. The results show that Wenquxing 22A reaches a maximum recognition accuracy of 95.75\%, which is higher than that of ODIN with online learning active according to \cite{frenkel20180}, and reduces power consumption by a factor of 5.13 compared with the accelerator solution of ODIN in our experiment.



\section{Background}
This section introduces the flexibility of the RISC-V ISA and two important concepts of SNN computing: the neuron model and the synaptic learning rule.



\subsection{Customized Instruction Extensions of RISC-V ISA}

In recent decades, the RISC-V architecture has become popular in new computational systems. Since RISC-V delivers flexible performance, users are allowed to build custom systems that suit their computing purposes \cite{greengard2020will}. The RISC-V ISA (Instruction Set Architecture) is designed in a modular way: the ISA has various subsets of instruction extensions that can be implemented optionally depending on actual needs. For instance, RISC-V provides a series of standard instruction extensions on top of the essential Integer (I) base, such as Multiply (M), Atomic (A), and single- and double-precision Floating point (F/D) \cite{waterman2011risc}. These extensions can be combined and described as, e.g., RV32IMAFD or RV32G.

More importantly, since RISC-V has an open-source and standard modular expansion philosophy, instruction extensions for special purposes can be introduced. These special-purpose instructions are fully discussed and evaluated before the final instructions are decided. In addition, the extended instruction sets do not change the basic architecture of the original instruction modules, which ensures the flexibility of RISC-V ISA extensions in terms of hardware implementation. Considering the advantages mentioned above, RISC-V is extension-native, especially for purpose-oriented special instruction extensions.

\subsection{Streamlined Neuron Model}
The classic Leaky Integrate-and-Fire (LIF) model can be expressed mathematically as:

\begin{equation}
\tau\frac{dv}{dt}=v_0-v+I \label{fm-tdlif}
\end{equation}

where $v$ and $I$ are the membrane potential and the input spike
current respectively. In every update cycle, if $v$ is greater
than the threshold, the neuron will fire and return its
membrane potential to $v_0$.

Figure \ref{fig-lif} illustrates a multi-input LIF neuron. Whether the LIF neuron receives a given input spike depends on the corresponding synaptic weight of this neuron.

\begin{figure}[!htbp]
\centering
\includegraphics[width=3.5in]{fig/lif.pdf}
\caption{A LIF Model with Multi-input.}
\label{fig-lif}
\end{figure}

The traditional LIF model needs to be pruned to integrate this neuron model into the pipeline. The LIF model described above is driven by an arbitrary time-dependent input current $I$. However, it is very difficult to implement the classic LIF model in the pipeline when considering the time parameter, because the computation would block the pipeline of the in-order processor. To avoid this situation and to implement the LIF model in Wenquxing 22A, the model is pruned to fit streamlined computing. We leave out the time parameters, because time in the pipeline is broken down into a series of time slices, and make the model driven by order-based spike inputs. The simplified streamlined LIF model can then be described mathematically as:
\begin{equation}
S_{current}=S_{previous} + V_{valid} -V_{leak} \label{fm-smlif}
\end{equation}
where $S_{previous}$ is the previous state of this neuron and $V_{valid}$
is the input voltage computed from the processed spikes. The value
$V_{leak}$ represents the LIF model's leakage voltage.
The current state, $S_{current}$, is compared with a threshold
voltage; if $S_{current}$ is greater than the threshold, it is reset to
an initial voltage and the neuron fires a spike. If
$S_{current}$ is smaller than the threshold, the current state is
stored in the neuron memory. This LIF model is based on the
order of spikes: the time variable has been removed and
substituted by the order of the occurrence of the spikes \cite{bi1998synaptic}.
Figure \ref{fig-workflw} illustrates the workflow of updating neurons at
the same level of the SNN in an update cycle; the LIF model updated with Formula \ref{fm-smlif} is defined as the streamlined LIF model. Based on the streamlined LIF model, the updating process divides SNN computing into 3
main parts. The spike processing comes first. In this process,
spike sequences are processed with synapses to generate
valid input spikes for the neurons to be updated.
Then the neurons receive the valid data and begin updating. In this
stage, neuromorphic processors update neurons according to
their neuron model, such as the LIF model. A neuron may
fire because its membrane potential is higher than a
threshold; in this case, its membrane potential is reset to
an initial value. Otherwise, the updated neuron state is stored
because the membrane potential is below the threshold.
Finally, whether a neuron has fired or not leads to different
results of STDP in the synapse updating process. 
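As a minimal software sketch of the update step in Formula \ref{fm-smlif} (this is an illustrative behavioral model, not the hardware implementation; the function and parameter names are ours, and the strict-greater threshold comparison is an assumption):

```python
def streamlined_lif_update(s_previous, v_valid, v_leak, threshold, v_init):
    """One order-based update step of the streamlined LIF model.

    Mirrors S_current = S_previous + V_valid - V_leak and returns
    (new_state, fired). All names are illustrative only.
    """
    s_current = s_previous + v_valid - v_leak
    if s_current > threshold:
        return v_init, True    # fire a spike and reset the membrane potential
    return s_current, False    # keep the updated state in neuron memory
```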

Although more complex mathematical models have been proposed by Izhikevich \cite{izhikevich2003simple} (Izh model) and Hodgkin–Huxley \cite{hodgkin1990quantitative} (HH model), current training rules are not suitable for such complex models.

\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{fig/twostdp.pdf}
\caption{(A) Typical “Time-based” STDP; (B) “Order-based”
STDP.}\label{fig-two}
\end{figure}

\subsection{Binary Stochastic STDP}

We utilize a hardware-friendly method, binary synaptic weights \cite{frenkel2019morphic}, to implement the synapse model. The limited weight resolution of this method may cause
a drop in accuracy. To avoid that, the idea is to use an efficient STDP learning rule, the Binary Stochastic STDP (BS-STDP). A typical STDP function $S(t_{post})$ is shown in Figure \ref{fig-two}A, where $\Delta t$ is the time between a post-synaptic spike at time $t_{post}$ and a pre-synaptic spike at time $t_{pre}$. This STDP is classified as “time-based” STDP and is enabled when $\Delta t$ ranges from $-T_{max}$ to $+T_{max}$. Figure \ref{fig-two}B illustrates a different type of STDP, which we call “order-based” STDP or event-based STDP \cite{zhong2021spike}. The time variable has been removed and substituted by the order of the 
occurrence of the spikes \cite{bi1998synaptic}. Order-based STDP only needs to keep the ordered list of pre- and post-synaptic spikes without any timestamp, i.e., simply the sequence of spikes \cite{thorpe1998rank}. Both types of STDP enhance the connection when the pre-neuron fires earlier than the post-neuron, so-called Long-Term Potentiation (LTP), and depress synaptic weights when the pre-spike occurs later than the post-spike, which is named Long-Term Depression (LTD).

The BS-STDP is an order-based STDP that contains 2 stages, LTP and LTD. The LTP process is enabled first. This process preserves synapses (sets the synaptic weight to “1”) that transmit spikes and contribute to neuron firing. The subsequent LTD process helps synaptic training avoid over-learning: synapses of the current neuron are depressed with a probability, called the LTD probability, which is normalized by the following formula:

\begin{gather}
P_{LTD}=1024\times\frac{\Delta w}{w_{sum}}\notag\\
\Delta w=w_{sum}-w_{exp}
\label{fm-pltd}
\end{gather}

where $w_{sum}$ is the number of actually active synapses and $\Delta w$ is the difference between the number of actually active synapses and the number of expected active synapses ($w_{exp}$). If $\frac{\Delta w}{w_{sum}}\le 0$, this probability is directly set to “0”. A random 10-bit number $x$ is generated by a 16-bit LFSR to compare with the LTD probability. If $x\le P_{LTD}$, the synaptic weight is set to “0”. After these two processes, the updated synaptic weights are stored back in the synapse memory. The full process of Synapse Update is shown in Figure \ref{fig-stdp}.
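The LTD probability of Formula \ref{fm-pltd} and the stochastic depression decision can be sketched in software as follows (a minimal sketch; the function names are ours, and the integer scaling mirrors the 10-bit comparison described above):

```python
def ltd_probability(w_sum, w_exp):
    # Formula 3: P_LTD = 1024 * (w_sum - w_exp) / w_sum, clamped to 0
    # when the fraction is non-positive (10-bit scale, 0..1024).
    delta_w = w_sum - w_exp
    if w_sum == 0 or delta_w <= 0:
        return 0
    return (1024 * delta_w) // w_sum

def depress_weight(weight, x, p_ltd):
    # x: 10-bit random number from the 16-bit LFSR; the weight is
    # cleared when x <= P_LTD, i.e., depression happens with
    # probability roughly P_LTD / 1024.
    return 0 if x <= p_ltd else weight
```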

\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{fig/bsstdp.pdf}
\caption{The Process of the Binary Stochastic STDP
learning rule.}\label{fig-stdp}
\end{figure}

\subsection{Recent Research on Neuromorphic Hardware}

To achieve bio-inspired computation, neuromorphic engineering plays a significant role. Analog, digital, and mixed analog-digital systems for SNNs have been introduced in recent research, and previous works demonstrate the possibility of efficient SNN computing. The Tianjic chip \cite{pei2019towards} integrates the two approaches, ANN and SNN, on a hybrid, synergistic platform, adopting a many-core architecture, reconfigurable building blocks, and a streamlined data flow with hybrid coding schemes; it combines both SNN and ANN to provide a hybrid training strategy. Loihi \cite{davies2018loihi} supports 128 on-chip neuromorphic cores with up to 1,024 neural units per core. TrueNorth is another brain-inspired neuromorphic computing architecture \cite{merolla2011digital}\cite{preissl2012compass}. It implements short-range “gray matter” connections with a crossbar memory and long-range “white matter” connections through a spike-based message-passing network.
As for open-source implementations, Frenkel \cite{frenkel20180} proposes ODIN, a 64k-synapse, 256-neuron online-learning digital spiking neuromorphic processor. This processor achieves a minimum energy per synaptic operation (SOP) of 12.7 pJ. Neftci \cite{neftci2014event} uses an event-driven variation of contrastive divergence (CD) to train an RBM constructed with IF neurons. SpiNNaker2 \cite{yan2019efficient} is a large supercomputer architecture with many cores and parallel computing capabilities.

However, an accelerator is not usable without a control core. Our idea is to combine the accelerator and a general-purpose processor by integrating the SNN function unit into the processor pipeline and exposing it through the SNN extension instruction set. The next section describes our custom SNN extension instructions based on the RISC-V ISA, which achieve SNN computing at high computational granularity.


\begin{table}[htbp]
\centering
\scriptsize
\caption{The Full Encoding Definition of RV-SNN Extension 1.0.}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{1}{|c|}{Category} & Name  & \multicolumn{1}{c|}{Encoding Definition} \\ \hline
\multirow{4}{*}{Neuron}        &VLEAK  & 0000100\_00000\_?????\_001\_?????\_0001011 \\ \cline{2-3} 
                               &NADD   & 0000000\_?????\_?????\_010\_?????\_0001011 \\ \cline{2-3} 
                               &SGE    & 0000000\_?????\_?????\_100\_?????\_0001011 \\ \cline{2-3} 
                               &SLS    & 0000001\_00000\_?????\_001\_?????\_0001011 \\ \hline
\multirow{2}{*}{Synapse}       &SUP    & 0000011\_?????\_?????\_001\_?????\_0001011 \\ \cline{2-3} 
                               &LTD    & 0000000\_?????\_?????\_110\_?????\_0001011 \\ \hline
\multirow{6}{*}{Other}         &ANDS   & 0000000\_?????\_?????\_000\_?????\_0001011 \\ \cline{2-3} 
                               &RPOP   & 0000000\_00000\_?????\_001\_?????\_0001011 \\ \cline{2-3} 
                               &NLD    & ????????????\_?????\_101\_?????\_0001011   \\ \cline{2-3} 
                               &NST    & ???????\_?????\_?????\_011\_?????\_0001011 \\ \cline{2-3} 
                               &INF    & 0000010\_00000\_?????\_001\_?????\_0001011 \\ \cline{2-3} 
                               &SINIT  & ????????????????\_?\_111\_?????\_0001011   \\ \hline
\end{tabular}
\label{tb-encoding}
\end{table}

\section{RV-SNN Instruction Extensions  1.0}
The RISC-V ISA leaves space for customized opcodes to add instructions for domain-specific accelerators. To integrate the streamlined LIF model into the pipeline of a generic processor, we propose RV-SNN 1.0 to realize the event-driven neuron model and the BS-STDP.

Table \ref{tb-encoding} shows the full instruction encoding definitions of RV-SNN 1.0. This
extension has 4 types of instructions based on the RISC-V
architecture: R-type, S-type, L-type, and U-type, which are
illustrated in Figure \ref{fig-riscvop}.

This section will describe the RV-SNN 1.0 extension in 3 parts:
LIF neuron update, synapses update, and other instructions in
RV-SNN. The detailed function descriptions are as follows.

\begin{figure}[htbp]
%\centering
\includegraphics[width=3.5in]{fig/riscvop.pdf}
\caption{Instruction Types in RV-SNN Extension (adopted from RISC-V ISA types).}
\label{fig-riscvop}
\end{figure}

\subsection{Extended Instructions for Streamlined LIF Neuron Update}

According to Section 2, the streamlined LIF model is driven by order-based input spikes. The valid spikes from the previous layer need to be counted before being delivered to the streamlined LIF neuron, and these inputs are accumulated with the earlier state of the updating neuron. The population of these inputs is $V_{valid}$ and the earlier state is $S_{previous}$ in Formula \ref{fm-smlif}. The value $V_{leak}$ is a parameter predefined by the “VLEAK” instruction.

Three instructions are used in this stage: “NADD” indicates “Neuron Add” and controls the accumulation of the 3 parameters in Formula \ref{fm-smlif}. The “SGE” instruction compares the current neuron's membrane potential with the threshold. The “SLS” instruction shifts a register in the SNN extension left to allocate the least-significant bit, which records whether the current neuron has fired. 
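The combined effect of these three instructions can be sketched behaviorally (a hypothetical software model, not the hardware; the 64-bit register width follows the RV64 baseline, and the exact threshold comparison is an assumption):

```python
MASK64 = (1 << 64) - 1  # 64-bit registers, per the RV64 baseline

def nadd(nr, v_valid, v_leak):
    # "NADD": accumulate the three terms of the streamlined LIF formula
    return nr + v_valid - v_leak

def sge(nr, threshold, v_init):
    # "SGE": threshold comparison; on firing, reset the state to v_init
    # (whether the hardware uses > or >= is a detail we assume here)
    return (v_init, 1) if nr > threshold else (nr, 0)

def sls(output, spike_bit):
    # "SLS": shift the output register left, recording the spike in the LSB
    return ((output << 1) | spike_bit) & MASK64
```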

\subsection{Extended Instructions for BS-STDP}

The synapse update process modifies the synaptic weights of the current neuron based on the input stream and the neuron firing state using the BS-STDP. 
At first, the LTP process of BS-STDP is enabled: “SUP” launches the LTP process in the SNN extension. Then the “LTD” instruction enables the LTD process, in which synapses of the current neuron are depressed with the LTD probability of Formula \ref{fm-pltd}. The Synapse Update stage thus uses the BS-STDP to toggle the binary synaptic weights between “0” and “1”.

In software as well as hardware, there are various options for representing synaptic weights when computing STDP. We need synaptic weights that make STDP hardware-friendly; thus, we focus on the binary weight model here. Once restricted to 1-bit weights, small $\Delta w$ values no longer need to be considered: the weights directly represent 2 states, “1” for the “on” state and “0” for the “off” state. For potentiation, binary STDP needs a pre-provided potentiation probability $P$, which is compared with a random number $x$ generated by a 16-bit LFSR. If $x\le P$, the synaptic weight is set to “1”. For depression, there is also a depression probability, but this value depends on the updated synaptic weights (see Formula \ref{fm-pltd}). As in the potentiation process, a stochastic number $x$ is generated, and if $x\le P_{LTD}$, the weight is set to “0”.
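A minimal sketch of this stochastic 1-bit update (an illustrative model; we assume both probabilities live on the same 10-bit 0--1024 scale as $P_{LTD}$, and the function name is ours):

```python
def binary_stdp_step(weight, x, p, potentiate):
    """One stochastic update of a 1-bit synaptic weight.

    weight: current 1-bit weight (0 or 1)
    x:      10-bit random number from the LFSR
    p:      potentiation probability P or depression probability P_LTD,
            assumed here to be on a 0..1024 scale
    """
    if potentiate:
        return 1 if x <= p else weight   # LTP: turn the synapse "on"
    return 0 if x <= p else weight       # LTD: turn the synapse "off"
```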

\subsection{Other Instructions}

In the workflow of the SNN, spike processing comes first. In this stage, the valid spikes are counted, i.e., the valid synapses are checked and spikes propagate through them from pre-neuron to post-neuron. The length of the list that keeps the spikes from the previous layer depends on the number of neurons in that layer. In this process, 2 instructions are needed: the “ANDS” instruction performs a bitwise AND operation between spikes and synaptic weights, and the result of “ANDS” is processed by “RPOP”, which counts the population of valid spikes in a 64-bit register. 
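A behavioral sketch of the two instructions (a hypothetical software model; the 64-bit width follows the register width mentioned above):

```python
MASK64 = (1 << 64) - 1

def ands(spikes, weights):
    # "ANDS": bitwise AND of the input spike vector and the binary
    # synaptic weight vector, yielding the valid spikes
    return (spikes & weights) & MASK64

def rpop(x):
    # "RPOP": population count of the valid spikes in a 64-bit register
    return bin(x & MASK64).count("1")
```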



To load and store the states of neurons and the weights of synapses, “NLD” and “NST” are designed, indicating “Neuron/Synapse Load” and “Neuron/Synapse Store”. The “INF” operation sets and provides a “teacher signal” for the supervised network. Several parameters need to be set to initialize the SNN in Wenquxing 22A: “SINIT” sets the initial state of each neuron, which is also the reset value after neuron firing. All these meta parameters are stored in the SNN special registers (described in Section 4) accessible for SNN computing in Wenquxing 22A.

We integrated our SNN extension instructions into our baseline, NutShell, as discussed in the next section.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/nu.pdf}
\caption{The Structure of the LIF Neuron Unit.}
\label{fig-nu}
\end{figure}

\section{Hardware Implementation}


\begin{figure*}[htbp]
\centering
\includegraphics[width=6in]{fig/micro.pdf}
\caption{Overall Architecture of Wenquxing 22A (Baseline: NutShell).}
\label{fig-micro}
\end{figure*}



The micro-architecture of Wenquxing 22A is designed based on the NutShell processor. The hardware implementation of Wenquxing 22A realizes RV-SNN 1.0, the streamlined LIF model, and the BS-STDP. The SNN Unit is implemented in the execution stage of the pipeline, and some other components of the processor are modified to fit SNN computing.


\subsection{Streamlined LIF Neuron Unit}

To reduce computation time under limited hardware conditions and, most importantly, to integrate the LIF model into the pipeline so that RV-SNN can be used, a simplified streamlined LIF model is optimal. We considered keeping the multiplication and division operations of the traditional LIF model in Formula \ref{fm-tdlif} and computing them by multiplexing the original multiply and division units in the NutShell processor, but gave this up because such computation always blocks the pipeline, making it inefficient. Since the time factors of spikes in STDP can be substituted by the order of spikes \cite{yousefzadeh2018practical}, we remove the time parameters from the LIF model. The time-related parameters are removed from the calculation while keeping several approximate quantities that store the essential information for neuron updating: the number of input spikes representing the input voltage, the leakage voltage to depress the neuron when it is inactive, and the previous state of the neuron being updated. As described in Section 2, we use Formula \ref{fm-smlif} to update the LIF model, which is defined as the streamlined LIF model.


\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/spu.pdf}
\caption{The Structure of the Spike Process Unit.}
\label{fig-spu}
\end{figure}	


Based on that, the Streamlined LIF Neuron Unit is designed as Figure \ref{fig-nu} displays. The “NADD”, “SLS”, and “SGE” instructions are executed in this unit. After the LSU (Load and Store Unit) reads the previous neuron state, the neuron unit begins to update the current neuron state. “NADD” adds 3 parameters: the previous state, the valid input, and the leakage voltage. The previous state of the updating neuron is loaded into the “nr” register. The valid inputs are the AND result of the input spikes and the binary synaptic weights. “VLEAK” provides the predefined leakage voltage. “SGE” handles the comparison between the current membrane potential and the threshold: if the membrane potential is greater, the neuron unit sets the updating neuron to the initial voltage and fires a spike. This spike is recorded in the “output” register and later shifted by the “SLS” instruction.

Because this neuron unit demands the number of valid input spikes as soon as possible, the data is processed before flowing into the streamlined neuron update by a module named the spike process unit. The spike process unit executes the SNN extension instructions “ANDS” and “RPOP”: “ANDS” performs the AND operation of input spikes and synapses, and then “RPOP” counts the population of the source register. Figure \ref{fig-spu} shows the structure of the spike process unit.


This process achieves the operation of calculating valid input spikes and counting their population as the input signal strength to the updating neuron.

\subsection{Binary Synaptic Weights Unit}

To apply the BS-STDP introduced in Section 2, the Binary Synaptic Weights Unit (SU) is designed as Figure \ref{fig-su} shows. The binary synaptic weights unit performs 2 stages, LTP and LTD. First, in the LTP process, the “SUP” instruction updates the synapses depending on the input spike sequence and the output spike in the “output” register. Then the LTD process is activated to suppress over-learning with the LTD probability of Formula \ref{fm-pltd}. In the LTD part, a 16-bit LFSR generates a 10-bit (out of 16 bits) stochastic number to compare with this probability (instruction “LTD”). After the STDP process, the updated synaptic weights are stored back to the synapse memory by the LSU. The STDP process can be bypassed so that only inference is performed, which means we can achieve online learning by controlling when STDP is active. The binary synaptic unit also handles the “SINIT” and “VLEAK” instructions to set the “vinit” register and the “vleak” register.
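Since the text does not specify the LFSR tap positions, the following sketch assumes a common maximal-length 16-bit Galois LFSR polynomial and takes the low 10 bits as the stochastic number:

```python
def lfsr16_step(state):
    # One step of a 16-bit Galois LFSR. The tap mask 0xB400 (taps
    # 16, 15, 13, 4) is a well-known maximal-length choice; the actual
    # taps in Wenquxing 22A are an assumption of this sketch.
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def stochastic10(state):
    # The low 10 bits serve as the number compared with P_LTD
    return state & 0x3FF
```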

\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/su.pdf}
\caption{The Structure of the Binary Synaptic Weights Unit.}
\label{fig-su}
\end{figure}

\subsection{Overall Architecture of Wenquxing 22A}

As mentioned above, we adapt our baseline, the NutShell processor, to support SNN computing. Figure \ref{fig-micro} shows the micro-architecture of Wenquxing 22A. The SNN Unit (shown as SNNU) is added to the execution stage of the pipeline. The three stages of the SNN workflow are handled by the SPU (Spike Process Unit), NU (Neuron Unit), and SU (Synapse Unit), all of which are integrated into the SNN unit. The SNN Special Register File is defined alongside the General Register File, and the ISU (Issue Unit) controls instruction issuing, avoiding data hazards.

\subsection{Other Hardware Details}
\subsubsection{SNN Register File}
Since the 32 general-purpose registers defined by RISC-V cannot meet the demands of SNN computing, we additionally design SNN special registers in our processor, consisting of 5 registers: the “vinit” register records the initial setting of SNN computing; the “output” register contains the output spikes from the same SNN level; the “nr” and “sr” registers represent the neuron register and synapse register respectively and temporarily keep the neuron state and synapse weights; and the “vleak” register stores the leakage voltage. In particular, the “vinit” register is the control register of the SNN execution unit, where the least-significant bit controls the STDP process: “1” for enabling and “0” for disabling. The structure of the register files in Wenquxing 22A is shown in Figure \ref{fig-regfile}.

\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/regfile.pdf}
\caption{Register File Structure in Wenquxing 22A.}
\label{fig-regfile}
\end{figure}

\subsubsection{Hardware Verification}

We use Chisel HDL to develop Wenquxing 22A, taking full advantage of functional programming and agile development. To guarantee the correctness of the SNN extension instructions, a verification software platform, Abstract Machine (AM), is applied. AM is a minimal, modularized, and machine-independent abstraction layer over computer hardware \cite{am}. RTL simulations utilize Verilator, an open-source tool that converts Verilog into a cycle-accurate behavioral model in C++ or SystemC \cite{snyder2004verilator}. We ported AM to Wenquxing 22A. 


For the functional coverage test, intrinsics embedded in C code are used to invoke the RV-SNN instructions. The C programs can call library functions in AM and are compiled into binary files by the RISC-V cross-compilation toolchain. These binary files are used as image files on the emulator of Wenquxing 22A generated by Verilator. Our test cases cover all of the instructions implemented in Wenquxing 22A, so we can claim that the functional coverage is 100\%. In addition, instead of testing code coverage after Chisel generates the Verilog code, we apply the coverage test at the Chisel level using the code coverage testing tools ChiselTest\footnote{ChiselTest: https://github.com/ucb-bar/chiseltest} and sbt-scoverage\footnote{sbt-scoverage: https://github.com/scoverage/sbt-scoverage}. The test file importing ChiselTest generates random stimuli fed to the SNN unit separated from Wenquxing 22A, and sbt-scoverage monitors the behavior of the design to generate the coverage report. The code coverage report is shown in Figure \ref{fig-coverage}, in which the statement coverage is 98.99\% and the branch coverage is 100\%.


\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{fig/coverage.pdf}
\caption{Code Coverage Test for SNN Unit in Wenquxing 22A.}
\label{fig-coverage}
\end{figure}

To accelerate the verification process, we synthesized our design to the FPGA platform, an Alveo U250 data center accelerator card, and successfully launched the Linux kernel, as shown in Figure \ref{fig-linux}. 

\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{fig/linux.png}
\caption{Launching the Linux Kernel with a Simple Hello Message on FPGA.}
\label{fig-linux}
\end{figure}


In addition, quick regression tests validate the correctness of the standard RISC-V instruction implementation. These tests demonstrate that our design correctly implements the SNN computing extensions. Furthermore, a simple 2-layer SNN that classifies the MNIST dataset is trained on Wenquxing 22A, as shown in the next section.


\begin{table*}[hbtp]
\centering
\caption{Comparison Between Wenquxing 22A (this work) and Other SNN Systems.}
\scriptsize
\begin{tabular}{|p{2cm}|p{2cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{2cm}|p{1.5cm}|p{2cm}|}
\hline
\textbf{Name}                & \textbf{Structure}          & \textbf{Learning Rule}       & \textbf{Resolution}        & \textbf{Classification Accuracy} & \textbf{Encoding}              & \textbf{Power Consumption} & \textbf{Hardware Utilization} \\ \hline
Querlioz 2013             & 784-300-10                 & STDP                         & high resolution & 93.50\%                   & Rate                          & N/A                  & N/A                    
\\ \hline
\multirow{2}{*}{Neftci 2014} & \multirow{2}{*}{784-500-40} & \multirow{2}{*}{STDP+SNN-CD} & 8-bit                      & 91.60\%                   & \multirow{2}{*}{Rate   Poisson}         & \multirow{2}{*}{N/A} & \multirow{2}{*}{N/A}   \\ \cline{4-5}
                             &                             &                              & 5-bit                      & 89.40\%                   &                                         &                      &                        \\ \hline
Diehl\&Cook 2015             & 784-6400-10                 & STDP                         & high resolution & 95.00\%                   & Rate   Poisson                       & N/A                  & N/A                    
\\ \hline
Yousefzadeh 2018             & 784-6400-10                 & STDP                         & 1-bit (24-bit classifier) & 95.70\%                   & Rate Poisson                          & N/A                  & N/A                    \\ \hline
ODIN 2018                & 784-10             &SDSP               &3-bit             & 91.90\%          & Rate   Poisson                &25.949 W    &63,411 LUT    \\ \hline
\textbf{This work}           & \textbf{784-40-10}             & \textbf{BS-STDP}                & \textbf{1-bit (1-bit classifier)}             & \textbf{95.75\%}          & \textbf{Rate Poisson}                 & \textbf{5.055 W}      & \textbf{56,487 LUT}    \\ \hline
\end{tabular}
\label{tb-cmp}
\end{table*}


\section{Experiment Results}

The MNIST dataset \cite{lecun1998gradient}\cite{mnist} has 70,000 samples of handwritten digits from 0 to 9, among which 60,000 are used for training and 10,000 for testing. Every sample is a 28×28 gray-scale image with pixel values up to 255. We use Wenquxing 22A to classify this dataset with the Binary Stochastic STDP learning rule. To compare power expenses and hardware utilization, we implement both our design and the ODIN processor \cite{frenkel20180}, a low-power digital neuromorphic accelerator. Both Wenquxing 22A and ODIN are synthesized and implemented on the same Alveo U250 to compare the power consumption of the two chips. The energy efficiency is measured by the following formula:
\begin{equation}
M_{eff}=\frac{Freq \times N_s}{W}  \label{fm-energy}
\end{equation}
where $M_{eff}$ is the energy efficiency in SOPS/W, $Freq$ is the clock frequency of the processor, $N_s$ is the number of synaptic instructions or operations per cycle, and $W$ is the power consumption of the SNN computing unit in the processor. The power consumption of the SNN computing unit in Wenquxing 22A, $W_{snn}$, is computed as $W_{snn} = W_{wenquxing22a} - W_{nutshell}$.
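As a numerical sanity check, Equation \ref{fm-energy} can be sketched in C; the input values in the usage example below are purely illustrative, not measured results from this work.

```c
/* Energy efficiency per Equation (1): M_eff = (Freq * N_s) / W.
 * freq_hz: clock frequency in Hz
 * n_s:     synaptic operations per cycle
 * w_snn:   power consumption of the SNN computing unit in watts */
static double energy_efficiency(double freq_hz, double n_s, double w_snn)
{
    return freq_hz * n_s / w_snn;
}
```

For example, a unit handling 64 synaptic operations per cycle at 300 MHz would need to stay around 8 mW to reach the 2.4 TSOPS/W range; this figure is back-of-the-envelope arithmetic, not a reported measurement.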


\subsection{Network Architecture}

The network architecture is presented in Figure \ref{fig-netarch}. We design a simple 2-layer supervised STDP-based SNN with pre-processing steps. We set the number of synapses per neuron to 28×28 to match the image format. The pre-processing steps applied to the MNIST dataset are deskewing and image contrast enhancement; Otsu's method \cite{otsu1979threshold} is applied for contrast enhancement to improve the performance of SNN training. To feed these samples to the SNN in Wenquxing 22A, a Poisson encoder generates rate-based Poisson-distributed spikes that stimulate the input layer. The encoder converts the input data into spike trains of the same shape: in each time step, an input neuron fires with probability $P = x$, where $x$ is the pixel value normalized to $[0,1]$. All these pre-processing steps are executed on Wenquxing 22A.
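The rate-encoding step can be sketched as follows. This is a software model with a toy PRNG (the `lcg_next` helper is hypothetical), not the encoder implementation in Wenquxing 22A.

```c
#include <stdint.h>

/* Toy linear congruential PRNG, for illustration only. */
static uint32_t lcg_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

/* Rate-based spike encoding (sketch): a pixel value x in [0,255] maps to a
 * firing probability of roughly x/256 per time step, approximating a
 * Poisson-distributed spike train. Returns 1 when the input neuron fires. */
static int encode_pixel(uint8_t x, uint32_t *rng)
{
    uint32_t r = lcg_next(rng) >> 24;  /* uniform draw in [0,255] */
    return r < x;                      /* spike with probability ~x/256 */
}
```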

\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{fig/netarch.pdf}
\caption{The Structure of Spiking Neural Network Applied to This Work.}
\label{fig-netarch}
\end{figure}

For the output layer shown in Figure \ref{fig-netarch}, we evaluate 2-layer fully connected networks with 10, 20, or 40 LIF neurons. On-chip training uses the embedded binary stochastic STDP rule.

In the 10-neuron network, a teacher signal for supervised learning drives the neuron corresponding to the class of the currently presented digit, while the other neurons are driven to low firing activity. For networks with more than 10 neurons, we apply an active learning method. First, the training images are presented to the 10 trained neurons as test samples, which yields the misclassified cases. Those error samples are then presented as training digits to the new neurons, supervised by the labels of these images. Each neuron corresponds to one class of digits, and every training step performs both testing and training. Thus, based on the test results of the previous 10-neuron SNN, the extra neurons are trained on the digits that were not recognized successfully.
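The error-driven selection step of this active learning method can be sketched as below; `toy_predict` is a hypothetical stand-in for the trained 10-neuron network, not the actual classifier.

```c
#include <stddef.h>

/* Hypothetical stand-in for the trained 10-neuron SNN: a toy classifier
 * that predicts the sample value modulo 10 (for illustration only). */
static int toy_predict(int sample)
{
    return sample % 10;
}

/* Active-learning selection (sketch): replay the training set through the
 * trained 10-neuron network and keep the misclassified samples; those
 * samples then train the extra neurons, supervised by their labels.
 * Returns the number of collected error samples. */
static size_t collect_errors(const int *samples, const int *labels, size_t n,
                             int (*predict)(int), int *errors)
{
    size_t n_err = 0;
    for (size_t i = 0; i < n; i++)
        if (predict(samples[i]) != labels[i])  /* misclassified: keep it */
            errors[n_err++] = samples[i];
    return n_err;
}
```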

\begin{figure*}[htbp]
\centering
\includegraphics[width=6in]{fig/40n.pdf}
\caption{Retrieved Synaptic Weights with Different Neuron Numbers ($w_{exp}$=256).}
\label{fig-sp}
\end{figure*}
\subsection{Results Comparison}

From recent works, we collect representative MNIST recognition results from different spiking computing systems. Table \ref{tb-cmp} compares the reported results of spiking systems \cite{querlioz2013immunity}\cite{neftci2014event}\cite{diehl2015unsupervised}\cite{yousefzadeh2018practical}\cite{frenkel20180} on the static MNIST dataset. “Structure” indicates the network structure; for instance, “784-10” means that the network has 2 layers with 784 and 10 neurons, respectively. “Learning Rule” is the training method of each work. “Resolution” is the precision of the synaptic weights in each spiking system. “Classification Accuracy (CA)” reports the test results of each work. Among all systems using STDP with a weight resolution of 8 bits or lower, Wenquxing 22A (denoted “this work”, set in bold in Table \ref{tb-cmp}) performs best. In this configuration, Wenquxing 22A consumes 5.055 W running at 100 MHz, and its peak effective power efficiency reaches 2.4 TSOPS/W at 300 MHz, exceeding other SNN neuromorphic processors. The effective power efficiency is listed in Table \ref{tb-eff}.

\begin{table}[htbp]
\centering
\footnotesize
\caption{The Energy Efficiency and Area Efficiency Comparison of Wenquxing 22A with Existing SNN Platforms.}
\label{tb-eff}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Platform} &IBM TrueNorth &Intel Loihi & ODIN   & \textbf{Wenquxing 22A} \\ \hline
\textbf{Model}             & SNN       & SNN   & SNN    & \textbf{SNN}           \\ \hline
\textbf{Clock}             & Async     & Async & 300MHz & \textbf{300MHz}        \\ \hline
\textbf{SOP/LUT}           & N/A       & N/A   & 1.24   & \textbf{339.9}         \\ \hline
\textbf{TSOPS/W}           & 0.4       & N/A   & 0.08   & \textbf{2.4}           \\ \hline
\textbf{pJ/SOP}            & N/A       & 4     & 12.7   & \textbf{3.57}          \\ \hline
\end{tabular}}
\end{table}

We adopt the same network architecture as the MNIST recognition task in \cite{frenkel20180}, in which the images are compressed from 28×28 pixels to 16×16 pixels to fit the synaptic weight structure of ODIN. As Figure \ref{fig-power} shows, the total power consumption is reduced from 25.949 W (ODIN) to 5.055 W (Wenquxing 22A), approximately a 5× reduction. According to \cite{frenkel20180}, ODIN achieves 0.08 TSOPS/W, while Wenquxing 22A reaches 2.4 TSOPS/W.

\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/power.pdf}
\caption{ Comparison of Power Consumption Between Wenquxing 22A and ODIN.}
\label{fig-power}
\end{figure}

According to \cite{urgeseinterfacing}, the ODIN processor is attached to a Rocket Chip generated from the Chipyard repository \cite{amid2020chipyard}, which serves as the control core for ODIN. Since the NutShell processor is the baseline of our design, in this comparison we use NutShell as the control core for ODIN instead of Rocket Chip, so that the difference between Wenquxing 22A and ODIN lies only in the SNN computing logic. In this case, the hardware utilization report in Table \ref{tb-harduse} indicates that Wenquxing 22A consumes fewer hardware resources. It is worth mentioning that, compared with its baseline, Wenquxing 22A uses only 1,288 more LUTs in the FPGA implementation, because it multiplexes the other computing units of NutShell and the pruned neuron model avoids multiplication and division in neuron updating. Detailed utilization of hardware resources is given in Table \ref{tb-harduse}.

\begin{table}[htbp]
\centering
\scriptsize
\caption{Hardware Utilization of Wenquxing 22A and ODIN.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Processor}&\textbf{Platform} & \textbf{LUT}  & \textbf{FF}   & \textbf{BRAM} \\ \hline
ODIN              &Alveo U250 & 8,212 & 5,962 & 9.5  \\ \hline
\textbf{This work (over baseline)}&\textbf{Alveo U250}          & \textbf{1,288} & \textbf{302} & \textbf{0}    \\ \hline
\end{tabular}}
\label{tb-harduse}
\end{table}

\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fig/diffneu.pdf}
\caption{Classification Accuracy with Different Neuron Numbers (maximum CA 95.75\%).}
\label{fig-diffneu}
\end{figure}

\subsection{Wenquxing 22A Performance}

The network is trained as 10-neuron, 20-neuron, and 40-neuron SNNs with BS-STDP in the hidden layer, and the results of these networks are compared. Figure \ref{fig-sp} shows the synaptic weights retrieved from Wenquxing 22A for each network, and Figure \ref{fig-diffneu} illustrates the Classification Accuracy (CA) on 10,000 MNIST test samples: the maximum accuracy of binary-weight SNNs with 10, 20, and 40 neurons on Wenquxing 22A is 80.94\%, 86.91\%, and 95.75\%, respectively. The increasing CA indicates that a small number of neurons has difficulty classifying digits with high similarity; as the number of neurons in the output layer increases under the same $w_{exp}$, recognition accuracy improves.


\section{Discussions}

In Section 5, the experiment results show that the accuracy of recognizing the MNIST dataset reaches a maximum of 95.75\% in our binary spiking system. However, convolutional neural networks (CNNs) perform much better on MNIST, with accuracies exceeding 99.0\% \cite{baldominos2019survey}. One possible reason is that SNNs transfer information with spikes and limited-precision synaptic weights, while CNNs rely on abundant floating-point calculations that carry richer features. Another possible reason lies in the learning rules: unlike CNNs, which readily use backpropagation, pure SNNs rely on forward-propagation learning rules such as STDP and SDSP \cite{brader2007learning}. However, event-driven SNNs achieve higher energy efficiency than CNNs because of their low and controllable firing rates. In this work, we seek a balance between power consumption and performance, so we propose the RV-SNN extensions, streamline the LIF model, and use BS-STDP. The following subsections discuss how these efforts work.

\subsection{The RV-SNN Extensions}

The RISC-V ISA is attracting growing attention in the neuromorphic computing field, not only because of its modular design and open-source nature but also because of its flexibility in customizing instruction extensions, which serve both research purposes and the application scenarios of the RISC-V ISA. We use this strength of the RISC-V ISA to develop the RV-SNN extensions, combining the advantages of CPUs and SNN accelerators to achieve a neuromorphic SNN processor. These instructions correspond to the fundamental calculation steps of the LIF model and BS-STDP, making them fine-grained and orthogonal.

This work shows that instruction extensions for SNN computing can work properly. The SNN unit can be integrated into a CPU as well as into accelerators, serving as a detachable part of a neuromorphic computing processor.

\subsection{Streamlined LIF Neuron Model}

To achieve event-driven neuromorphic computing and make the implementation hardware-friendly (integrating the neuron model into the in-order pipeline), we need to reduce the computation of the neuron model, which is the LIF model in this work. We propose an approach that simplifies the neuron model without degrading its performance in Spiking Neural Networks and integrates it into the pipeline. In our design, the LIF model is order-based (event-based) and carries no time parameters. Thus, Wenquxing 22A avoids time-related computation and keeps only addition and subtraction, which makes the SNN acceleration unit easy to integrate. To keep the pipeline efficient, the original LIF model is pruned and streamlined to fit the calculation in the pipeline.
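A minimal software sketch of such a streamlined update is shown below; the leak and threshold constants are illustrative, and the hardware datapath may differ.

```c
#include <stdint.h>

#define LEAK      1    /* constant leak per update, illustrative */
#define THRESHOLD 256  /* firing threshold, illustrative */

typedef struct { int32_t v; } neuron_t;  /* membrane potential */

/* Streamlined LIF update (sketch): event-driven, no time constants, and no
 * multiplication or division -- integration and leak are one addition and
 * one subtraction. Returns 1 when the neuron fires (and resets). */
static int lif_update(neuron_t *n, int32_t weighted_input)
{
    n->v += weighted_input;                   /* integrate: addition only  */
    n->v = (n->v > LEAK) ? n->v - LEAK : 0;   /* leak: subtraction only    */
    if (n->v >= THRESHOLD) {                  /* threshold: fire and reset */
        n->v = 0;
        return 1;
    }
    return 0;
}
```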

\subsection{Binary Synaptic Weights and Binary Stochastic STDP}
As a key component of an SNN, the synapse model with limited synaptic weights is another way to reduce power consumption in neuromorphic computing. To match the simplified LIF model, we assume the synapses in the SNN only keep “ON/OFF” states, using binary weights. This feature enables Wenquxing 22A to reach a peak effective energy efficiency of 2.4 TSOPS/W. The advantage of binary computing is that when Wenquxing 22A executes one synaptic operation, it computes 64 synapses simultaneously, and every instruction of this type completes in a single clock cycle.
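The idea can be sketched in software: with 1-bit weights, 64 synapses fit in one 64-bit word, so a single AND plus a population count processes all of them at once. This is an illustrative model, not the hardware implementation.

```c
#include <stdint.h>

/* Binary synaptic operation (sketch): `spikes` holds 64 input spike bits,
 * `weights` holds the 64 one-bit synaptic weights. Their AND selects the
 * spikes arriving on "ON" synapses, and the bit count is the contribution
 * to the membrane potential. */
static int32_t binary_synapse_op(uint64_t spikes, uint64_t weights)
{
    uint64_t active = spikes & weights;
    int32_t sum = 0;
    while (active) {                   /* software popcount */
        sum += (int32_t)(active & 1u);
        active >>= 1;
    }
    return sum;
}
```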
 

During synapse updating, we also use a learning rule suited to this limited setting: Binary Stochastic STDP. It is hardware-friendly because of the binary encoding. However, this type of STDP offers little flexibility for higher-resolution synapse models, such as 8-bit or 16-bit weights. In Davies’ work \cite{davies2018loihi}, programmable STDP learning rules provide researchers with more options in the choice of neuron and synapse models.
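One possible form of such an update is sketched below; the 8-bit probability encoding and the `lcg_next` helper are hypothetical, for illustration only.

```c
#include <stdint.h>

/* Toy linear congruential PRNG, for illustration only. */
static uint32_t lcg_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

/* Binary stochastic STDP update (sketch): the weight is a single bit.
 * When the pre-synaptic spike precedes the post-synaptic spike, the weight
 * is set with probability p_on/256; otherwise it is cleared with
 * probability p_off/256. Randomness replaces the graded weight change of
 * conventional STDP. */
static unsigned bs_stdp_update(unsigned weight_bit, int pre_before_post,
                               uint32_t p_on, uint32_t p_off, uint32_t *rng)
{
    uint32_t r = lcg_next(rng) >> 24;          /* uniform draw in [0,255] */
    if (pre_before_post)
        return (r < p_on) ? 1u : weight_bit;   /* stochastic potentiation */
    return (r < p_off) ? 0u : weight_bit;      /* stochastic depression   */
}
```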

\section{Conclusions and Future Work}

In this work, we propose a highly efficient in-pipeline SNN computing solution, Wenquxing 22A. We extend the execution unit of our baseline, NutShell, to integrate the SNN computing unit into the pipeline. The RV-SNN 1.0 instruction extensions make our design tightly coupled. The streamlined LIF model replaces all multiplication operations with additions, which reduces hardware utilization. The 1-bit synaptic weights reduce power consumption, and Wenquxing 22A can handle 64 synapses at the same time with binary stochastic STDP.
To verify our processor, we use both software simulation and hardware verification. The AM platform provides a minimal, modularized, and machine-independent abstraction layer of the computer hardware for testing Wenquxing 22A. In addition, we implement the design on an FPGA platform to launch the Linux kernel and compile a series of test programs for the SNN instructions.
We compare our design with other SNN systems in terms of classification accuracy on the MNIST dataset, power consumption, and hardware resource utilization. The power expenses are reduced by implementing limited-width synaptic weights and deploying SNN computing in the pipeline. Wenquxing 22A achieves good classification accuracy on MNIST while reducing power consumption and hardware resource utilization compared with the ODIN processor.
Wenquxing 22A is developed in the Chisel HDL, which takes full advantage of functional programming and agile development. The use of agile development tools is of great help in achieving efficient chip design in a short cycle \cite{yuzihao2019riscv}.

Our future work will support more complex neuron models, such as the Hodgkin-Huxley and Izhikevich models. To improve the orthogonality between RV-SNN and other RISC-V instructions, RV-SNN will be refined, and more SNN extension instructions for different neuron models and learning rules will be supported. Software (API) support will also be completed, and we plan to tape out this chip in the future. Building on the experience of designing a single neuromorphic processor core, we will explore large-scale Network-on-Chip (NoC) solutions for multi-core neuromorphic computing.


\bibliographystyle{plain}
\bibliography{references}


\end{document}

