Our scheduling algorithm is implemented on top of the USIMM\cite{chatterjee2012usimm} infrastructure.
To compare with other algorithms, we use the standard configurations and workloads
from USIMM-v1.3 and primarily consider three evaluation metrics:

\begin{enumerate}
\item Delay (or Performance)
\item Energy-Delay Product (EDP)
\item Performance-Fairness Product (PFP)
\end{enumerate}

In the rest of this section, we first introduce the composition of the USIMM-v1.3 dataset
and then evaluate our scheduling algorithm on these three metrics. For each metric, we
give its precise numerical definition and then present the experimental results.

\subsection{USIMM-v1.3 Dataset}
The standard dataset contains 18 simulations. Each simulation specifies a configuration of the simulated machine and one or more
workloads (memory access traces).

There are two different configurations; details are shown in Table~\ref{configtable}.

USIMM provides several workloads, each identified by a two-letter abbreviation:

\begin{center}
\begin{tabular}{c c}
Abbr.		&	Name	\\ \hline
c1 		    & comm1		\\
c2 		    & comm2		\\
bl 		    & black		\\
fr 		    & freq		\\
fa 		    & face		\\
fe 		    & ferret	\\
fl 		    & fluid		\\
sw 		    & swapt		\\
st 		    & stream	\\
\end{tabular}
\end{center}

These tasks can each run alone. There is also a four-threaded version of canneal, denoted ``MT-canneal'' or ``MTc''.

Each simulation is named by a `-'-separated sequence of tasks (the threads running on the simulated machine), followed by the digit `1' or `4' giving the number of memory channels; for example, ``fl-sw-c2-c2-1'' runs fluid, swapt, comm2, and comm2 on the one-channel configuration.
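For illustration, this naming convention can be parsed with a short sketch (the function name is ours and hypothetical, not part of the USIMM code base):

```python
def parse_simulation_name(name: str):
    """Split a simulation name such as 'fl-sw-c2-c2-1' into its
    task abbreviations and channel count (hypothetical helper,
    not part of USIMM itself)."""
    *tasks, channels = name.split("-")
    return tasks, int(channels)

# Example: 'st-st-st-st-1' denotes four 'stream' threads on one channel.
```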

\begin{table*}
\small
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Parameter                                & 1 channel                       & 4 channel                       \\ \hline \hline
Processor clock speed                    & 3.2GHz                          & 3.2GHz                          \\
Processor ROB size                       & 128                             & 160                             \\
Processor retire width                   & 2                               & 4                               \\
Processor fetch width                    & 4                               & 4                               \\
Processor pipeline depth                 & 10                              & 10                              \\ \hline
Memory bus speed                         & 800 MHz (plus DDR)              & 800 MHz (plus DDR)              \\
DDR3 Memory channels                     & 1                               & 4                               \\
Ranks per channel                        & 2                               & 2                               \\
Banks per rank                           & 8                               & 8                               \\
Rows per bank                            & 32768 $\times$ NUMCORES         & 32768 $\times$ NUMCORES         \\
Columns (cache lines) per row            & 128                             & 128                             \\
Cache line size                          & 64 B                            & 64 B                            \\
Address bits (function of above params)  & 32+log(NUMCORES)                & 34+log(NUMCORES)                \\
Write queue capacity                     & 64                              & 96                              \\
Address mapping                          & row:rank:bank:chnl:col:blkoff   & row:col:rank:bank:chnl:blkoff	 \\
Write queue bypass latency               & 10 cpu cycles                   & 10 cpu cycles                   \\ \hline
\end{tabular}
\end{center}
\caption{Configuration details}
\label{configtable}
\end{table*}

\subsection{Delay (or Performance)}

The delay (or performance) metric is computed across all simulated configurations and is measured as the sum of execution times (SoET) over all programs, in units of 10M cycles. The values and the comparison with the ``close'' algorithm can be found in Table~\ref{datatable}.

Figure~\ref{performancefig} shows the performance of our algorithm.
Since the sums of execution times differ greatly across simulations, we compare against the provided ``close'' algorithm and plot the normalized performance improvement (the SoET of ``close'' divided by the SoET of our algorithm, so that values above 1 indicate a speedup).
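As a minimal sketch, the normalization can be computed as follows, assuming the improvement is reported as the baseline's SoET divided by ours, which is consistent with the overall totals in Table~\ref{datatable} (function and variable names are ours):

```python
def performance_promotion(soet_close: float, soet_ours: float) -> float:
    """Normalized performance improvement: the baseline's sum of
    execution times divided by ours; values above 1.0 mean our
    scheduler is faster (hypothetical helper, names are ours)."""
    return soet_close / soet_ours

# Overall SoET from the results table (units of 10M cycles):
# 3173 / 3147 is roughly 1.0083, i.e. about a 0.83% improvement.
```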

\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{fig1.png}
\end{center}
\caption{Performance improvement over ``close''}
\label{performancefig}
\end{figure}

Here ``8task-4'' and ``16task-4'' denote the 4-channel simulations ``fl-fl-sw-sw-c2-c2-fe-fe'' and ``fl-fl-sw-sw-c2-c2-fe-fe-bl-bl-fr-fr-c1-c1-st-st'', respectively.

Figure~\ref{performancefig} shows that our algorithm generally outperforms ``close'', the best of the provided scheduling managers. The only exception is ``st-st-st-st-1'', i.e., four ``stream'' threads on the one-channel configuration.

Overall, our algorithm achieves about a 0.83\% performance improvement over ``close'' and is much better than any of the other provided schedulers.

\subsection{Energy-Delay Product (EDP)}

Another metric is the energy-delay product, i.e., the product of the energy consumed and the execution time. It is reported directly by the USIMM framework, in units of J$\cdot$s.

As with performance, we compare our EDP against that of ``close'' and plot the improvement in Figure~\ref{energyfig}.
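A minimal sketch of how the two EDP figures are formed and compared (USIMM reports the EDP directly; the function names and the ratio convention here are ours):

```python
def edp(energy_joules: float, delay_seconds: float) -> float:
    """Energy-delay product in J*s (names are ours)."""
    return energy_joules * delay_seconds

def edp_promotion(edp_close: float, edp_ours: float) -> float:
    """Ratio of the baseline's EDP to ours; values above 1.0 mean
    our scheduler achieves the lower EDP (names are ours)."""
    return edp_close / edp_ours

# Overall EDP from the results table: 21.70 vs 21.42 J*s,
# roughly a 1.3% reduction.
```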

\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{fig2.png}
\end{center}
\caption{EDP improvement over ``close''}
\label{energyfig}
\end{figure}

The figure shows that our algorithm achieves a lower EDP than ``close'' in most simulations. The one near-tie is ``c2-4'' (running ``comm2'' on the 4-channel configuration), where the ratio is about 1.000073, essentially 1.

In the other simulations, our algorithm yields a notable EDP reduction; for ``fl-sw-c2-c2-1'' it reduces the EDP by about 3.5\%.
Overall, our algorithm reduces the EDP by about 1.3\% compared with ``close''.

\subsection{Performance-Fairness Product (PFP)}

For a given multi-programmed experiment, we compute the slowdown for each program as:

$$Slowdown_i = \frac{MCPI_i^{shared}}{MCPI_i^{alone}}$$

where $MCPI_i^{shared}$ is measured when program $i$ runs together with the other threads, and $MCPI_i^{alone}$ is measured when program $i$ runs alone on the same configuration.

Then we can define the unfairness as
$$Unfairness = \max_i Slowdown_i$$

There are other ways to define unfairness; one common choice is $\frac{\max_i Slowdown_i}{\min_i Slowdown_i}$. However, under that definition the system appears fairer when the minimal slowdown rises while the maximal slowdown stays the same, which we find counterintuitive.

For each experiment, the PFP is measured as $PFP = Unfairness \cdot ExecutionTime$. The overall PFP is then computed by summing over all 14 multi-programmed simulations.
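The per-experiment computation above can be sketched as follows (function names are ours, not part of USIMM):

```python
def slowdown(mcpi_shared: float, mcpi_alone: float) -> float:
    """Slowdown of one program: its MCPI in the shared run
    divided by its MCPI when running alone."""
    return mcpi_shared / mcpi_alone

def pfp(slowdowns, execution_time: float) -> float:
    """Performance-Fairness Product: the unfairness (maximum
    slowdown across all programs) times the execution time."""
    unfairness = max(slowdowns)
    return unfairness * execution_time
```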

Figure~\ref{fairfig} shows the PFP improvement over ``close''; overall, our algorithm is about 1.7\% better.

\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{fig3.png}
\end{center}
\caption{PFP improvement over ``close''}
\label{fairfig}
\end{figure}

The detailed data are listed in Table~\ref{datatable}.

\begin{table*}[h]
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Workload & Config & \multicolumn{2}{|c|}{Performance (10 M cyc)} & \multicolumn{2}{|c|}{Max slowdown} & \multicolumn{2}{|c|}{EDP (J$\cdot$s)} \\
         &        & Close & Proposed & Close & Proposed & Close & Proposed \\
\hline
\hline
MT-canneal                 & 1 chan   & 404   & 402     & -           & -         & 3.98   & 3.94    \\
MT-canneal                 & 4 chan   & 167   & 166     & -           & -         & 1.56   & 1.55    \\
bl-bl-fr-fr                & 1 chan   & 147   & 145     & 1.190962    & 1.175131  & 0.48   & 0.47    \\
bl-bl-fr-fr                & 4 chan   &  76   & 75      & 1.102521    & 1.09873   & 0.32   & 0.32    \\
c1-c1                      & 1 chan   &  83   & 82      & 1.121065    & 1.113522  & 0.40   & 0.40    \\
c1-c1                      & 4 chan   &  46   & 46      & 1.059009    & 1.052488  & 0.36   & 0.36    \\
c1-c1-c2-c2                & 1 chan   & 236   & 231     & 1.474806    & 1.454915  & 1.44   & 1.41    \\
c1-c1-c2-c2                & 4 chan   & 118   & 117     & 1.230654    & 1.221257  & 0.85   & 0.84    \\
c2                         & 1 chan   &  43   & 43      & -           & -         & 0.37   & 0.37    \\
c2                         & 4 chan   &  27   & 27      & -           & -         & 0.39   & 0.39    \\
fa-fa-fe-fe                & 1 chan   & 224   & 222     & 1.516633    & 1.510781  & 1.14   & 1.13    \\
fa-fa-fe-fe                & 4 chan   &  99   & 99      & 1.195728    & 1.194576  & 0.56   & 0.56    \\
fl-sw-c2-c2                & 1 chan   & 244   & 240     & 1.469448    & 1.439951  & 1.44   & 1.39    \\
fl-sw-c2-c2                & 4 chan   & 121   & 120     & 1.169184    & 1.166886  & 0.83   & 0.83    \\
st-st-st-st                & 1 chan   & 159   & 159     & 1.263702    & 1.260529  & 0.56   & 0.56    \\
st-st-st-st                & 4 chan   &  81   & 80      & 1.144699    & 1.141237  & 0.35   & 0.35    \\
fl-fl-sw-sw-c2-c2-fe-fe    & 4 chan   & 279   & 277     & 1.460693    & 1.455339  & 1.88   & 1.86    \\
fl-fl-sw-sw-c2-c2-fe-fe-   & 4 chan   & 620   & 615     & 2.012815    & 1.996878  & 4.76   & 4.70    \\
-bl-bl-fr-fr-c1-c1-st-st   &          &       &         &             &           &        &         \\ \hline
Overall                    &          & 3173  & 3147    & 1.315137	  & 1.305873  & 21.70  & 21.42   \\
                           &          &       &         & PFP: 3801   & PFP: 3734 &        &         \\
\hline
\end{tabular}
\caption{Comparison of key metrics on baseline and proposed schedulers.
c1 and c2 represent commercial transaction-processing workloads, MT-canneal is a
4-threaded version of canneal, and the rest are single-threaded PARSEC programs.
``Close'' represents an opportunistic close-page policy that precharges inactive
banks during idle cycles.}
\label{datatable}
\end{center}
\normalsize
\end{table*}
