\documentclass[twocolumn]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{listings}             % Include the listings-package
% Uncomment the following line to allow the usage of graphics (.jpg, .png, etc.)
%\usepackage[pdftex]{graphicx}
\newcommand{\Expect}{{\rm I\kern-.3em E}}
\newcommand{\rnddown}[2]{\lfloor #1 \rfloor_{#2}}
\newcommand{\abs}[1]{\left| #1 \right|}
\title{Fixed-Width Truncated Multipliers \\ Survey, Implementation and Proposal}
\author{Tuan Nguyen}
% Start the document
\begin{document}
\lstset{language=Matlab}          % Set your language (you can change the language for each code block)
\maketitle

\begin{abstract}
Fixed-width multipliers are basic components of many digital signal processing (DSP) systems. In many cases, where the primary concerns are performance, area, and power dissipation, and controlled errors are acceptable, truncated multipliers are the preferred choice. The basic idea underlying truncated multipliers is that a number of least significant columns of partial products (PPs) are truncated to reduce area and power dissipation as well as to increase performance. In this article, various techniques related to truncated multipliers are first discussed, compared, and contrasted. Second, three key techniques are implemented in Matlab to gain insight into those methods. Finally, a new proposed approach and preliminary results are presented.
\end{abstract}
% Create a new 1st level heading
\section{Introduction}
%The importance of multipliers and its improvement
High-speed parallel multipliers are fundamental building blocks in digital signal processing (DSP) systems \cite{ma:1990}. In many cases, parallel multipliers constitute a large part of these systems. As a result, improvements in multipliers can lead to significant improvements in DSP systems. 

%Introduce about fixed-width multipliers and full multipliers
A typical case of multiplication in many DSP systems is fixed-width multiplication, where both inputs and outputs are fixed-width numbers. In these systems, the $2n$-bit products produced by the parallel multipliers are rounded to $n$ bits to avoid growth in word size. 

%Introduce about truncated multipliers and its problems
As presented in \cite{lim:1992}, truncated multiplication provides an efficient method for reducing the hardware requirements of rounded parallel multipliers. With truncated multiplication, only the $n+k$ most significant columns of the multiplication matrix are used to compute the product. The error produced by omitting the $n-k$ least significant columns and rounding the final result to $n$ bits is estimated, and this estimate is added along with the $n+k$ most significant columns to produce the rounded product. Although this leads to additional error in the rounded product, various techniques have been developed to help limit this error \cite{king:1997}, \cite{schulte:1993}, \cite{stine:2003}.

%Introduce very briefly about various techniques' contribution

%Introduce about my proposal

%Organization of the rest of paper

However, to the best of our knowledge, all current contributions are based on the assumption that the input distribution is uniform (each input bit is assumed to be 0 or 1 with equal probability). We believe that this assumption is too restrictive and, in many cases, is not guaranteed to hold. Therefore, in this research we first propose a solution for the case where the input distribution is an arbitrary known distribution. Second, we propose a solution for the case where the input distributions are unknown but belong to a class of distributions, using a minimax approach.

The paper is organized as follows: Section II summarizes the state of the art in truncated multipliers; Section III states the problem under input uncertainty; Section IV presents solutions to the problems pointed out in Section III; Section V reports the experimental results; Section VI contains some discussion; and, finally, Section VII presents conclusions and future work. 

\section{Truncated Multipliers}
A truncated multiplier is a system that takes two inputs $(A, B)$ and produces an output $\hat{Z} \approx Z = A \cdot B$, as shown in Figure \ref{fig:tm}. 
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.5]{imgs/tm.png}
\caption{Truncated Multipliers}
\label{fig:tm}
\end{figure}

For convenience, we assume that an unsigned $n$-bit multiplicand $A$ is multiplied by an unsigned $n$-bit multiplier $B$ to produce an unsigned $2n$-bit product $Z$. Figure \ref{fig:44matrix} shows an example with a $4 \times 4$ multiplication matrix. For fractional numbers, the values of $A$, $B$ and $Z$ are:

\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.5]{imgs/mul.png}
\caption{4x4 Multiplication Matrix}
\label{fig:44matrix}
\end{figure}

\[
A = \sum_{i=0}^{n-1}{a_i \cdot 2^{-n+i}} \hspace{1 cm} B = \sum_{j=0}^{n-1}{b_j \cdot 2^{-n+j}}
\]
\[
Z = A \cdot B = \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} (a_i \cdot b_j) \cdot 2 ^{-2n+i+j} =  \sum_{k=0}^{2n-1} {\pi_k \cdot 2^{-2n+k}}
\]

A \textbf{full multiplier} first computes all $n^2$ partial products (PPs) $\pi_{ij} = a_i \cdot b_j$, then sums the weighted PPs to form the double-precision product, adds a 1 at the $n^{th}$ least significant bit position of the product, and finally truncates the least significant $n$ bits of the $2n$-bit sum \cite{swartzlander:1999}:
\[
\hat{Z}_{true} = \left\lfloor \sum_{k=0}^{2n-1} {\pi_k \cdot 2^{-2n+k}} + \frac{ulp}{2} \right\rfloor_n
\]
in which $\left \lfloor \cdot \right\rfloor_n$ denotes rounding down to $n$ bits and the unit in the last place is $ulp = 2^{-n}$.
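As a concrete illustration, the rounding step above can be sketched in Python (a sketch only; the function name and the use of exact rational arithmetic are ours, standing in for the bit-level hardware):

```python
from fractions import Fraction

def full_mult_rounded(a_int, b_int, n):
    """Round-to-nearest full multiplication of two n-bit unsigned
    fractions A = a_int / 2^n and B = b_int / 2^n.
    Computes the exact 2n-bit product, adds ulp/2 = 2^-(n+1),
    and floors to n bits, as in Z_hat = floor(Z + ulp/2)_n."""
    z_int = a_int * b_int                       # exact product, scale 2^-2n
    zhat_int = (z_int + (1 << (n - 1))) >> n    # add ulp/2, drop low n bits
    return Fraction(zhat_int, 1 << n)

# The absolute error never exceeds 0.5 ulp = 2^-(n+1):
n = 4
half_ulp = Fraction(1, 1 << (n + 1))
worst = max(abs(full_mult_rounded(a, b, n) - Fraction(a * b, 1 << (2 * n)))
            for a in range(1 << n) for b in range(1 << n))
print(worst <= half_ulp)  # True
```

The exhaustive check over all $2^{2n}$ input pairs confirms the 0.5 ulp bound stated above.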

This method, on the one hand, guarantees that the absolute error is no larger than 0.5 ulp; on the other hand, it pays for this accuracy with complexity, area, and power consumption. In applications where only the single-precision product is required, the least significant part of the product does not need to be computed exactly (as originally proposed by Lim in \cite{lim:1992}). Such a design is called a truncated (also single-precision or fixed-width) multiplier.

In \textbf{truncated multipliers}, only the $(n+k)$ most significant columns of the multiplication matrix are used to compute the product. Figure \ref{fig:44truncatedmatrix} shows a $4 \times 4$-bit example. 

\[
Z_{trunc}  =  \sum_{i+j = n-k}^{i+j = 2n-1} {(a_i \cdot b_j) \cdot 2 ^{-2n+i+j}}
\]

The error produced by omitting the $(n-k)$ least significant columns (reduction error) and rounding the final result to $n$ bits (rounding error) is estimated, and this estimate is added along with the $(n+k)$ most significant columns to produce the rounded product.  

%\begin{eqnarray*}
%E_{total} & = & \left |{\hat{Z} - Z}\right| =  E_{reduction} + E_{roudning}\\
%E_{reduction} & = & \left |{Z - \sum_{i+j = n-k}^{i+j = 2n-1} {(a_i \cdot b_j) \cdot 2 ^{-2n+i+j}}}\right| \\
%E_{rounding} & = & \left| \sum_{i+j = n-k}^{i+j = 2n-1} {(a_i \cdot b_j) \cdot 2 ^{-2n+i+j}} - \hat{Z}_t\right| \\
%\end{eqnarray*}


\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.5]{imgs/44matrix.png}
\caption{4x4 Truncated Multiplication Matrix}
\label{fig:44truncatedmatrix}
\end{figure}

\[
\hat{Z}_{est}  =  \left\lfloor Z_{trunc} + \text{\textbf{C}}\right\rfloor_n 
\]

Various techniques have been developed to improve this estimation and reduce the total error. 
\subsection{Constant Correction}
In \cite{lim:1992}, Lim proposed that the correction value \textbf{C} (which he named the fixed bias correction) should be the expectation (average) of the truncated columns, with \emph{the assumption that all PPs are independent and identically distributed (i.i.d.) with uniform distribution}. In fact, Lim's correction value \textbf{C} equals the expectation of the reduction error:
\[
\textbf{C} = \Expect{[E_{red}]} = \frac{1}{4} \sum_{q=0}^{n-k-1}(q+1)\cdot 2^{-2n+q}
\]
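This expectation can be verified numerically. The Python sketch below (function names are ours) compares the closed form against exhaustive enumeration over all uniform inputs, taking the truncated part to be the $n-k$ least significant columns (with $k=0$ this is the full $n$ columns):

```python
from fractions import Fraction
from itertools import product

def lim_constant(n, k):
    """Closed form for E[E_red] with uniform i.i.d. input bits:
    each PP is 1 with probability 1/4, and column q < n-k holds q+1 PPs."""
    return Fraction(1, 4) * sum((q + 1) * Fraction(1, 1 << (2 * n - q))
                                for q in range(n - k))

def mean_truncated_sum(n, k):
    """Average, over all 2^(2n) input pairs, of the sum T of the
    n-k least significant PP columns (those with i + j < n - k)."""
    total = Fraction(0)
    for a, b in product(range(1 << n), repeat=2):
        total += sum(Fraction(1, 1 << (2 * n - i - j))
                     for i in range(n) for j in range(n)
                     if (a >> i) & 1 and (b >> j) & 1 and i + j < n - k)
    return total / (1 << (2 * n))

print(lim_constant(4, 1) == mean_truncated_sum(4, 1))  # True
```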

Lim's idea was then improved by Schulte and Swartzlander \cite{schulte:1993}, who considered not only the reduction error but also the rounding error when computing the correction value \textbf{C}. To estimate the expected value of the rounding error, \emph{it is also assumed that the distribution of any product bit $p_k$ is uniform}. Moreover, Schulte and Swartzlander also suggested that \textbf{C} should be rounded to $(n+k)$ bits.

\[
\textbf{C}= \Biggl\lfloor \Expect{[E_{total}]}\Biggr\rfloor_{(n+k)} = \Biggl\lfloor \Expect{[E_{red} + E_{rnd}]}\Biggr\rfloor_{(n+k)}
\]

\subsection{Variable Correction}
A method named data-dependent truncation was then proposed by King and Swartzlander \cite{king:1997}. In the previous contributions, \textbf{C} is constant and independent of the data (inputs). With data-dependent truncation, the values of the PPs in column $(n-k-1)$ are used to estimate the error due to cutting off the $(n-k)$ least significant columns. This is accomplished by adding the PPs in column $(n-k-1)$ to column $(n-k)$. To compensate for the rounding error, a constant is added to columns $2n-2$ to $2n-k$ of the multiplication matrix.

\[
\textbf{C} = \left\lfloor \sum_{i+j = n-k-1} (a_i \cdot b_j) \cdot 2^{-2n + (n-k-1) + 1} + \Expect{[E_{rnd}]} \right\rfloor_{n+k}
\]
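The data-dependent term of this correction can be sketched as follows (a Python sketch; the rounding-error constant $\Expect{[E_{rnd}]}$ is omitted here, and the function name is ours):

```python
from fractions import Fraction

def variable_correction(a, b, n, k):
    """Data-dependent correction term: the PPs of column n-k-1 are
    promoted one position, i.e. counted with weight
    2^(-2n + (n-k-1) + 1) = 2^(-n-k)."""
    col = n - k - 1
    count = sum(((a >> i) & 1) & ((b >> (col - i)) & 1)
                for i in range(col + 1))
    return count * Fraction(1, 1 << (n + k))

# e.g. n = 4, k = 1: column 2 holds the PPs a0*b2, a1*b1, a2*b0
print(variable_correction(0b0111, 0b0111, 4, 1))  # 3/32
```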

In \cite{stine:2003}, Stine and Duverne proposed a new method named hybrid truncated multipliers, which combines the constant and variable correction methods. They introduced a new parameter $p$, which represents the fraction of variable correction to use. The number of variable correction bits is computed from $N_{variable}$, the number of bits used in the variable correction method:

\[
N_{hybrid} = \left\lfloor N_{variable} \cdot p \right\rfloor
\]

The hybrid method still uses a correction constant to compensate for the rounding error. However, a new correction constant is computed based on the difference between the new variable correction constant and the original correction constant, as follows:
\[
C_{VCT'} = 2^{-2n-k-2} \cdot N_{hybrid}
\]
\[
C_{round} = \Biggl\lfloor C_{CCT} - C_{VCT'} \Biggr\rfloor_{(n+k)}
\]

In \cite{petra:2010}, \cite{decaro:2013}, the authors apply an optimization approach and numerical methods to truncated multipliers. The correction value is considered as a function of the $(n+k+1)^{th}$ column (which is named the IC, or input correction vector). The authors used direct search to find the optimal points. However, like the above methods, the authors also assumed that the input distribution is uniform.

\section{Minimax Approach}
The minimax approach is effective for a wide range of problems in which the input distributions are unknown. The basic idea of the minimax approach is to find the solution that minimizes the cost in the worst case. The typical case is hypothesis testing \cite{poor1994introduction}. We assume that there are two possible hypotheses $H_0$ and $H_1$, corresponding to two possible probability distributions $P_0$ and $P_1$, respectively. This can be written as:
\[
H_0: Y \sim P_0
\]
versus
\[
H_1: Y \sim P_1
\]

A decision rule $\delta$ for $H_0$ versus $H_1$ is any partition of the observation set $\Gamma$ into sets $\Gamma_1$ and $\Gamma_0$ such that we choose $H_j$ when $y \in \Gamma_j$. The decision rule $\delta$ can be considered as a function on $\Gamma$ given by:

\[ \delta(y) = \left\{
  \begin{array}{l l}
    1 & \quad \text{if $y \in \Gamma_1$ }\\
    0 & \quad \text{if $y \in \Gamma_1^c$}
  \end{array} \right.
\]
Naturally, we would like to assign costs to our decisions. $C_{ij}$ is the cost incurred by choosing hypothesis $H_i$ when hypothesis $H_j$ is true. With binary testing, we have four costs: $C_{00}, C_{01}, C_{10}$ and $C_{11}$.

We can then define the conditional risk for each hypothesis as the expected cost incurred by decision rule $\delta$ when that hypothesis is true, as follows:

\[
R_j(\delta) = C_{1j} P_j(\Gamma_1) + C_{0j} P_j(\Gamma_0), j = 0,1.
\]

\subsection{Bayesian Hypothesis Testing} 
\label{sec:bayes}
Assume that we know the prior probabilities of the two hypotheses, $\pi_0$ and $\pi_1 = 1 - \pi_0$, respectively. For given priors, we can define an average, or Bayes, risk as follows:
\[
r(\delta) = \pi_0 R_0(\delta) + \pi_1 R_1(\delta).
\]

The Bayes decision rule computes the likelihood ratio for the observed value of $Y$ and then makes its decision by comparing this ratio to the threshold $\tau$, as follows:

\[ \delta_B(y) = \left\{
  \begin{array}{l l}
    1 & \quad \text{if $L(y) \geq \tau$ }\\
    0 & \quad \text{if $L(y) < \tau$}
  \end{array} \right.
\]

in which the likelihood ratio and the threshold are computed as follows:
\[
L(y) = \frac{p_1(y)}{p_0(y)}, y \in \Gamma
\]
\[
\tau = \frac{\pi_0 ( C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})}
\]
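For illustration, the Bayes rule over a finite observation alphabet can be sketched in Python (the distributions, costs, and names below form a toy example of ours):

```python
def bayes_decision(y, p0, p1, pi0, C):
    """Bayes rule: decide H1 iff L(y) = p1(y)/p0(y) >= tau, where
    tau = pi0*(C10 - C00) / (pi1*(C01 - C11)) and C[i][j] = C_ij."""
    pi1 = 1 - pi0
    tau = pi0 * (C[1][0] - C[0][0]) / (pi1 * (C[0][1] - C[1][1]))
    return 1 if p1[y] / p0[y] >= tau else 0

# Toy example: uniform costs (0-1 loss) and equal priors give tau = 1.
p0 = {0: 0.8, 1: 0.2}
p1 = {0: 0.3, 1: 0.7}
C = [[0, 1], [1, 0]]          # C00 = C11 = 0, C01 = C10 = 1
print(bayes_decision(1, p0, p1, 0.5, C))  # 1  (L(1) = 3.5 >= 1)
print(bayes_decision(0, p0, p1, 0.5, C))  # 0  (L(0) = 0.375 < 1)
```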
\subsection{Minimax Hypothesis Testing}
If the prior is unknown, i.e., $(\pi_0, \pi_1 = 1 - \pi_0)$ is variable, we consider the risk as a function of the prior:

\[
r(\pi_0, \delta) = \pi_0 R_0(\delta) + (1-\pi_0) R_1(\delta)
\]

To deal with the uncertainty of the inputs, the basic idea is to find the solution that is optimal in the worst case. Briefly, we try to find
\[
\min_\delta \max \{R_0(\delta), R_1(\delta)\} 
\]
Because, for a given $\delta$, $r(\pi_0, \delta)$ is a linear function of $\pi_0$, this is equivalent to finding
\[
\min_\delta \max_{\pi_0} r(\pi_0, \delta)
\]

For each prior $\pi_0$, let $\delta_{\pi_0}$ denote a Bayes rule for that prior (see Section \ref{sec:bayes}). Let $V(\pi_0) = r(\pi_0, \delta_{\pi_0})$. It can be proved that $V(\pi_0)$ is a continuous, concave function of $\pi_0$.

It can also be proved that
\[
\arg \min_\delta \max_{0\le \pi_0 \le 1} r(\pi_0, \delta) = (\pi_L, \delta_{\pi_L}) 
\]
in which 
\[
\pi_L = \arg \max_{\pi_0} V(\pi_0)
\]
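Under 0-1 costs the Bayes risk has the simple form $V(\pi_0) = \sum_y \min\{\pi_0\, p_0(y),\, (1-\pi_0)\, p_1(y)\}$, so the least-favorable prior $\pi_L$ can be found by a grid search. A Python sketch on a toy example of ours:

```python
def V(pi0, p0, p1):
    """Bayes risk of the Bayes rule under 0-1 costs:
    V(pi0) = sum_y min(pi0 * p0(y), (1 - pi0) * p1(y))."""
    return sum(min(pi0 * p0[y], (1 - pi0) * p1[y]) for y in p0)

p0 = {0: 0.8, 1: 0.2}
p1 = {0: 0.3, 1: 0.7}
grid = [i / 1000 for i in range(1001)]
pi_L = max(grid, key=lambda t: V(t, p0, p1))   # least-favorable prior
# Analytically, V peaks where 0.8*t = 0.3*(1 - t), i.e. t = 3/11.
print(abs(pi_L - 3 / 11) < 1e-2)  # True
```

The concavity of $V$ guarantees that the grid maximum converges to $\pi_L$ as the grid is refined.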

\section{Robust Truncated Multipliers}
\subsection{Problem Statement}
Recall that in single-precision multipliers, an unsigned $n$-bit multiplicand $A$ is multiplied by an unsigned $n$-bit multiplier $B$ to produce an unsigned $2n$-bit product $Z$. 
\[
A = \sum_{i=0}^{n-1}{a_i \cdot 2^{-n+i}} \hspace{1 cm} B = \sum_{j=0}^{n-1}{b_j \cdot 2^{-n+j}}
\]
\[
Z = A \cdot B = \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} (a_i \cdot b_j) \cdot 2 ^{-2n+i+j} =  \sum_{k=0}^{2n-1} {\pi_k \cdot 2^{-2n+k}}
\]
In truncated multipliers, only the $(n+k)$ most significant columns of the multiplication matrix are used to compute the product.

\emph{\textbf{General Problem}}: 

Assuming that the input binary digits are i.i.d., taking the values $(1, 0)$ with probabilities $(\alpha, 1 - \alpha)$, where $0 \le \alpha \le 1$, we need to design a truncated multiplier $M$ that minimizes the mean error $E$ in the worst case over $\alpha$. In other words, we need to design a truncated multiplier $M$ such that: 
\[
\min_{M} \max_{0 \le \alpha \le 1} E (M, \alpha) 
\] 

This is a nested optimization problem, in which one optimization (finding the worst-case prior given a solution) is embedded in another (finding the best solution for that worst case). As a result, the general problem is hard to solve completely. Therefore, it needs to be simplified into solvable forms, as stated in the next sections.

\subsection{Minimax Correction Constant}
In Minimax Correction Constant, we restrict $M$, the truncated multiplier, to truncated multipliers that use the constant correction method. In other words, we need to find a correction constant $C$ such that:

\[
\min_{C} \max_{0 \le \alpha \le 1} E(C, \alpha)
\]
in which $E(C, \alpha)$ is the absolute average error given the prior $\alpha$ and the constant C. 
\[
E(C, \alpha) = \abs{\Expect {(\rnddown{Z_{trunc} + C}{n} - Z)}}
\]

\subsection{Minimax Variable Correction}
In Minimax Variable Correction, the truncated multiplier $M$ is a variable correction truncated multiplier, which uses column $(n-k-1)$ (denoted $IC_{n-k-1}$) to predict the correction. In other words, we need to find a function $F$ such that:
\[
\min_{F} \max_{0 \le \alpha \le 1} E(F, \alpha)
\]
in which $E(F, \alpha)$ is the absolute average error given the prior $\alpha$ and the function F. 
\[
E(F, \alpha) = \abs{\Expect {(\rnddown{Z_{trunc} + F(IC_{n-k-1})}{n} - Z)}}
\]
In general, $F$ is defined on the $2^{n-k}$ values of column $IC_{n-k-1}$. To solve the above problem, it needs to be simplified further, as shown in the next section.

\section{Solutions}
\subsection{Minimax Correction Constant}
\subsubsection{Minimize Reduction Error}
Let $T$ be the sum of the $(n - k)$ truncated columns, as shown in Figure \ref{fig:truncatedpart}. Given that the input digits are i.i.d. with probability $\alpha$ of a digit being 1, the probability of each PP being 1 is $\alpha^2$. We can then compute $AS$, the average of the sum of the truncated columns, as follows:

\[
AS = \Expect[T] = \alpha^2 \sum_{i=0}^{n-k-1} {(i+1)\cdot 2^{-2n+i}}
\]
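The closed form for $AS$ can be checked against direct enumeration. The Python sketch below (function names are ours) weights every input pair by its Bernoulli probability:

```python
from fractions import Fraction
from itertools import product

def AS(alpha, n, k):
    """Closed form: AS = alpha^2 * sum_{i=0}^{n-k-1} (i+1) * 2^(-2n+i)."""
    return alpha**2 * sum((i + 1) * Fraction(1, 1 << (2 * n - i))
                          for i in range(n - k))

def AS_brute(alpha, n, k):
    """E[T] by enumeration: each input pattern has probability
    alpha^(#ones) * (1-alpha)^(#zeros); T sums columns i + j < n - k."""
    total = Fraction(0)
    for a, b in product(range(1 << n), repeat=2):
        pa = alpha**bin(a).count("1") * (1 - alpha)**(n - bin(a).count("1"))
        pb = alpha**bin(b).count("1") * (1 - alpha)**(n - bin(b).count("1"))
        T = sum(Fraction(1, 1 << (2 * n - i - j))
                for i in range(n) for j in range(n)
                if (a >> i) & 1 and (b >> j) & 1 and i + j < n - k)
        total += pa * pb * T
    return total

print(AS(Fraction(1, 3), 4, 1) == AS_brute(Fraction(1, 3), 4, 1))  # True
```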

\begin{figure}[hbtp]
\centering
\includegraphics[width = 0.5\columnwidth]{imgs/truncatedpart.jpg}
\caption{Truncated Columns}
\label{fig:truncatedpart}
\end{figure}

$AS$ is a quadratic function of the prior $\alpha$, as shown in Figure \ref{fig:sumvsprior}.

\begin{figure}[hbtp]
\centering
\includegraphics[width = \columnwidth]{imgs/sum_vs_prior.jpg}
\caption{Sum vs Prior}
\label{fig:sumvsprior}
\end{figure}

To compensate for the truncated columns, we add a correction constant to the remaining columns. We now try to minimize the worst-case reduction error:
\[
\min_C \max_{\alpha} \abs{E_{red}(C, \alpha)} 
\]
in which

\begin{align*}
E_{red} (C, \alpha) & = \Expect \{(A\cdot B - T + C) - A \cdot B\}\\
& =  \Expect \{C - T\}\\
& =  C - \Expect \{T\}\\
& =  C - AS
\end{align*}
Our problem turns out to be 

\[
\min_C \max_{\alpha} \abs{C - AS} 
\]
Because $AS$ is a continuous, increasing function of $\alpha$ on $[0, 1]$, it is not hard to see that the best worst case occurs when the constant $C$ takes the value:
\[
C = \frac{\max(AS) + \min(AS)}{2} = 0.5 \cdot \max(AS)
\]
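This choice can be verified numerically. In the sketch below (function names ours), the worst-case reduction error over a grid of priors is minimized exactly at $C = 0.5 \cdot \max(AS)$:

```python
from fractions import Fraction

def AS_max(n, k):
    """AS(alpha) = alpha^2 * AS_max is increasing on [0, 1],
    so max(AS) = AS(1)."""
    return sum((i + 1) * Fraction(1, 1 << (2 * n - i)) for i in range(n - k))

def worst_reduction_error(C, n, k, steps=100):
    """max over a grid of priors alpha of |C - AS(alpha)|."""
    M = AS_max(n, k)
    return max(abs(C - Fraction(a, steps)**2 * M) for a in range(steps + 1))

n, k = 5, 2
M = AS_max(n, k)
C_star = M / 2
candidates = [Fraction(j, 100) * M for j in range(101)]
best = min(candidates, key=lambda C: worst_reduction_error(C, n, k))
print(best == C_star)  # True
```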
This is shown more intuitively in Figure \ref{fig:minimax_red}.

\begin{figure}[hbtp]
\centering
\includegraphics[width=\columnwidth]{imgs/minimax_red.jpg}
\caption{Minimax Reduction Error}
\label{fig:minimax_red}
\end{figure}

\subsubsection{Minimize Total Error}
Recall that, in total error optimization, we need to solve
\[
\min_{C} \max_{0 \le \alpha \le 1} \abs{E(C, \alpha)}
\]
in which $E(C, \alpha)$ is the average error given the prior $\alpha$ and the constant C. 
\[
E(C, \alpha) = \Expect {(\rnddown{Z_{trunc} + C}{n} - Z)}
\]

Unfortunately, due to the characteristics of the rounding operation, we currently cannot reduce the total error to a simple function of the correction constant and the prior (the way we did with the reduction error). Fortunately, we can still solve it numerically with direct search. To use this method, however, we need to simplify the problem further.

First, we need to constrain the correction constant. Because it is added to the remaining columns (the $(n+k)$ most significant columns), it should be rounded to $(n+k)$ fractional bits. Moreover, it should also be less than or equal to the maximum total error $ME$:
\begin{align*}
ME &=\max_{A,B} \abs{\rnddown{Z_{trunc}(A,B)}{n} - A\cdot B}\\
&= \sum_{i = n+1}^{n+k}{2^{-i}} + \max(T)
\end{align*}
Briefly, we search over $C = i \cdot u$ such that $i \in \mathbb{Z}$, $0 \le i \le \frac{ME}{u}$, and $u = 2^{-n-k}$. In most cases, the total number of values of $i$ to search is small.

Second, we need to constrain the search space of the prior $\alpha$. In the scope of this report, $\alpha$ is searched with step $\epsilon = 0.01$.

The pseudo-code of the algorithm is as follows:

\begin{lstlisting}[frame=single, basicstyle=\footnotesize]  
C = 0 : 2^(-n-k) : ME;
P = 0 : 0.01 : 1;
for i = 1:length(C)
    for j = 1:length(P)
        E_total(j) = get_err(C(i), P(j));
    end
    E_max(i) = max(E_total);
end
[E_min, i_best] = min(E_max);
C_opt = C(i_best);
\end{lstlisting}
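A runnable counterpart of the pseudo-code above, in Python (a sketch only: here \texttt{get\_err} is realized by exact enumeration of $E(C,\alpha)$ over all input pairs, and the grid steps and function names are ours):

```python
import math
from fractions import Fraction
from itertools import product

def total_err(C, alpha, n, k):
    """|E[ floor(Z_trunc + C)_n - Z ]| by exact enumeration."""
    err = Fraction(0)
    for a, b in product(range(1 << n), repeat=2):
        pa = alpha**bin(a).count("1") * (1 - alpha)**(n - bin(a).count("1"))
        pb = alpha**bin(b).count("1") * (1 - alpha)**(n - bin(b).count("1"))
        Z = Fraction(a * b, 1 << (2 * n))
        Zt = sum(Fraction(1, 1 << (2 * n - i - j))      # kept columns only
                 for i in range(n) for j in range(n)
                 if (a >> i) & 1 and (b >> j) & 1 and i + j >= n - k)
        Zhat = Fraction(math.floor((Zt + C) * (1 << n)), 1 << n)
        err += pa * pb * (Zhat - Z)
    return abs(err)

def minimax_constant(n, k, alpha_steps=20):
    """Direct search over C = i * u, u = 2^-(n+k), up to the bound ME."""
    u = Fraction(1, 1 << (n + k))
    maxT = sum((q + 1) * Fraction(1, 1 << (2 * n - q)) for q in range(n - k))
    ME = sum(Fraction(1, 1 << (n + j)) for j in range(1, k + 1)) + maxT
    alphas = [Fraction(a, alpha_steps) for a in range(alpha_steps + 1)]
    best_C, best_worst = None, None
    i = 0
    while i * u <= ME:
        worst = max(total_err(i * u, al, n, k) for al in alphas)
        if best_worst is None or worst < best_worst:
            best_C, best_worst = i * u, worst
        i += 1
    return best_C, best_worst

C_opt, E_worst = minimax_constant(3, 1)
print(C_opt, E_worst)
```

Exhaustive enumeration is exponential in $n$, so this sketch is only practical for small word sizes; it serves to validate the Matlab search above.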

The output of the algorithm for $n = 5$, $k = 2$ is shown in Figure \ref{fig:minmax_etotal}:
\begin{figure}[hbtp]
\centering
\includegraphics[width=\columnwidth]{imgs/minmax_etotal.jpg}
\caption{Minimax Total Error}
\label{fig:minmax_etotal}
\end{figure}

\subsection{Minimax Variable Correction}
(under research)
\section{Error Analysis and Comparisons}

\section{Discussion}

\section{Conclusion and Future Work}
% Uncomment the following two lines if you want to have a bibliography. Please do not forget to add an entry to your bibliography and reference it by using the \cite{} command
\bibliographystyle{unsrt}
\bibliography{document} 

% End of the document
\end{document}
