\section{Model Evaluation}\label{sec:evaluation}
\subsection{Experimental Setup}
\emph{Workloads.} We are interested in workloads whose execution time and power vary with core speed, and thus disregard memory-intensive applications, whose power and performance are relatively insensitive to processor core speed. We chose compute-intensive benchmarks from SPEC CPU2006~\cite{henning2006spec}.
Of the 29 SPEC benchmarks in this suite, we omitted one (\texttt{400.perlbench}) due to its long execution times.
In the experiments, we assigned a benchmark to each core.
We considered two assignments: assignment ``$SW$,'' where the same benchmark is assigned to all cores, and assignment ``$MW$,'' where we might assign different benchmarks to different cores ($MW$ assignments are specified in more detail below).

\emph{Hardware.} We did our experiments on four different generations of x86 microarchitectures (Nehalem, Sandy Bridge, Ivy Bridge, and Haswell), using a total of five different multicore processors. In \reftab{tab:platform}, $\# P$ denotes the number of processors and $\# C$ denotes the number of cores on each processor. We disabled Turbo Boost to prevent hardware-based processor speed scaling.

\emph{Speed Scaling and Core Affinity.} We used the Linux user-level \emph{cpufreq} interface to set core frequencies.%
  \footnote{For instance, the \emph{cpufreq} interface allows one to set core $i$ to frequency \texttt{Fre} (in kHz) on the command line via \textcode{echo Fre > /sys/devices/system/cpu/cpu$i$/cpufreq/scaling\_setspeed}.}
We used the Linux \textcode{taskset} command to bind each process to a physical core.%
  \footnote{To bind the launched process \textcode{BenchName} to core $i$ and run it $k$ times, one can use \textcode{taskset -c $i$ runspec --config=My.cfg --action onlyrun --size=test --noreportable --iterations=$k$ BenchName}.}
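For concreteness, the two mechanisms above can be wrapped as follows. This is a minimal Python sketch, not the scripts we used, and the helper names are ours; the sysfs path follows the standard Linux \emph{cpufreq} layout (writing it requires root privileges and the \textcode{userspace} governor).

```python
# Hypothetical helpers mirroring the footnoted shell commands.
# Paths follow the Linux cpufreq sysfs layout; frequencies are in kHz.

def scaling_setspeed_path(core: int) -> str:
    """sysfs file that accepts a target frequency (kHz) for one core."""
    return f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_setspeed"

def set_core_speed_cmd(core: int, freq_khz: int) -> str:
    """Shell command equivalent to the footnote's echo invocation."""
    return f"echo {freq_khz} > {scaling_setspeed_path(core)}"

def taskset_cmd(core: int, iterations: int, bench: str) -> str:
    """taskset invocation that pins a SPEC runspec run to a single core."""
    return (f"taskset -c {core} runspec --config=My.cfg --action onlyrun "
            f"--size=test --noreportable --iterations={iterations} {bench}")
```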

\emph{Power Measurement.} For all the quad-core processors in \reftab{tab:platform}, we measured power directly with a clamp ammeter.
For the machine with dual octa-core Sandy Bridge processors, we used Intel's Running Average Power Limit (RAPL) interface~\cite{david2010rapl}.
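As an illustration of the RAPL-based measurement, average package power over an interval can be derived from two readings of the \textcode{energy\_uj} counter exposed by the Linux powercap interface. The sketch below is ours, not the paper's tooling; the default wraparound bound is illustrative, and in practice it should be read from \textcode{max\_energy\_range\_uj}.

```python
# Sketch: average power from two RAPL energy samples (Linux powercap,
# /sys/class/powercap/intel-rapl:0/energy_uj). The counter counts
# microjoules and wraps at max_energy_range_uj (illustrative default here).

def rapl_avg_power_w(e0_uj: int, e1_uj: int, dt_s: float,
                     max_range_uj: int = 262143328850) -> float:
    """Average power (watts) between two energy_uj readings dt_s apart."""
    delta_uj = e1_uj - e0_uj
    if delta_uj < 0:              # counter wrapped between the two samples
        delta_uj += max_range_uj
    return delta_uj / dt_s / 1e6  # microjoules per second -> watts
```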

\emph{Speed Range Selection.} If a processor has $N$ homogeneous cores and each core can be set to any of $K$ frequencies independently, the number of distinct speed configurations (counting each multiset of per-core frequencies once) is ${N+K-1 \choose N}$. If $K=16$ and $N=4$, then ${N+K-1 \choose N} = {19 \choose 4} = 3876$. If one benchmark run takes 10 seconds, then all 28 benchmarks take $3876 \times 10 \times 28 = 1085280$ seconds (more than 300 hours). Thus, to keep the total experimental time manageable, we selected a smaller number of frequencies that still covers the full frequency range.
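The configuration-count arithmetic can be checked directly (a short Python sketch; the 10-second run time and 28 benchmarks are the figures used above):

```python
from math import comb

# With N homogeneous cores and K frequency settings per core, the number of
# distinct speed configurations (order of cores does not matter) is
# C(N + K - 1, N).
N, K = 4, 16
configs = comb(N + K - 1, N)   # C(19, 4) = 3876
seconds = configs * 10 * 28    # 10 s per run, 28 benchmarks
hours = seconds / 3600         # roughly 301 hours
```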
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%
\Large
\newsavebox{\boxplatform}
\begin{lrbox}{\boxplatform}
\larger
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{2}{|c|}{Processors} & Available Frequencies (kHz) \\
\cline{1-2}
Model Name&$ \#P\times \#C$& \multicolumn{1}{|c|}{}\\
\hline
\multirow{2}{*}{Core i7-975 (Nehalem)}  &  $1\times 4$ & 3325000,  3192000,  3059000, 2926000, 2793000, 2660000, 2527000, \\
                                          && 2394000, 2261000, 2128000, 1995000, 1862000, 1729000, 1596000\\
\hline
\multirow{2}{*}{Core i7-2600k (Sandy Bridge)}  &  $1\times 4$ & 3400000, 3200000, 3000000, 2800000, 2600000, 2400000, 2200000, \\
  && 2000000, 1800000, 1600000\\
\hline
\multirow{2}{*}{Core i7-3770K (Ivy Bridge)}  &  $1\times 4$ & 3500000, 3400000, 3200000, 3100000, 3000000, 2900000, 2700000, \\
  && 2600000, 2500000, 2400000, 2200000, 2100000, 2000000, 1900000, \\
  && 1700000, 1600000\\
\hline
\multirow{2}{*}{Core i7-4770 (Haswell)}  &  $1\times 4$ & 3500000, 3300000, 3100000, 3000000, 2800000, 2600000, 2400000, \\
  && 2200000, 2100000, 1900000, 1700000, 1500000, 1300000, 1200000, \\
  && 1000000, 800000 \\
\hline
\multirow{2}{*}{Xeon E5-2670 (Sandy Bridge)}  &  $2\times 8$ & 2600000, 2500000, 2400000, 2300000, 2200000, 2100000, 2000000, \\
  && 1900000, 1800000, 1700000, 1600000, 1500000, 1400000, 1300000, \\
  && 1200000 \\
\hline
\end{tabular}
\end{lrbox}
\normalsize
\begin{table}[htbp]
\caption{Four generations of microarchitectures and five multicore processors used in the model evaluation.}
\begin{center}
\label{tab:platform}
\scalebox{0.31}{\usebox{\boxplatform}}
\end{center}
\end{table}
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% Next Section
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%
\subsection{Model Accuracy}
We measure model accuracy by the relative error, $|\mbox{modeled} - \mbox{measured}| / \mbox{measured}$, i.e., the magnitude of the difference between modeled and measured power, normalized by the measured value.
\RefTable{tab:sameworkload} shows the results of different candidate regression functions and variables for benchmark \textcode{410.bwaves} on the quad-core Ivy Bridge platform.
Note that this result is just a sample; we collected and analyzed a full data set covering all benchmarks and platforms.

The columns `Res$\le5\%$' and `Res$\le3\%$' give the fraction of samples whose relative error is at most $5\%$ or $3\%$, respectively; the higher these values, the more accurate the power model. `Max$\%$' and `Avg$\%$' are the maximum and average of the relative errors. `Max Res' and `Min Res' are the maximum and minimum residuals (modeled minus measured), and `Avg RelRes' is the average magnitude of the residuals.
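For clarity, these summary statistics can be computed from paired modeled and measured power values as follows (a Python sketch; the function and key names are ours, not the paper's):

```python
# Sketch of the table's summary statistics from paired modeled/measured
# power values (same units, e.g. watts).

def summarize(modeled, measured):
    residuals = [p - m for p, m in zip(modeled, measured)]    # modeled - measured
    rel = [abs(r) / m for r, m in zip(residuals, measured)]   # relative errors
    n = len(rel)
    return {
        "Res<=5%": sum(e <= 0.05 for e in rel) / n,  # fraction within 5%
        "Res<=3%": sum(e <= 0.03 for e in rel) / n,  # fraction within 3%
        "Max%": max(rel),                            # worst relative error
        "Avg%": sum(rel) / n,                        # mean relative error
        "Max Res": max(residuals),                   # largest residual
        "Min Res": min(residuals),                   # smallest residual
        "Avg RelRes": sum(abs(r) for r in residuals) / n,  # mean |residual|
    }
```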

\RefTable{tab:sameworkload} shows that regression functions $R1$--$R5$ achieve very high prediction accuracy with variables \AvgFreq and \DisparityMaxAvg. Among these five regression functions, $R1$ is the simplest; moreover, its average relative error is as low as $0.45\%$ and its maximum relative error is less than $4.2\%$. Replacing \DisparityMaxAvg with any other disparity variable leads to higher prediction errors. Though not shown, results from other benchmarks are similar. These observations underlie our claim that model $R1$ using \DisparityMaxAvg is better than the alternatives we considered.

\RefTable{tab:sameworkload} also shows the inaccuracy of power models $R6$ and $R7$, the natural extensions of the canonical single-core power model. Their maximum and average relative errors are $19\%$ and $4.6\%$, respectively, about 4.6$\times$ and 10$\times$ the errors of our proposed model. Only about $58\%$ of samples have a relative error within $5\%$ under power models $R6$ and $R7$.

We have also tested these scenarios using mixed workloads, both with four unique benchmarks and with two unique benchmarks.%
  \footnote{For instance, ``two unique benchmarks'' on a quad-core processor means one benchmark running on two cores and the other benchmark running on the remaining two cores.}
The results are similar to those in \reftab{tab:sameworkload}.
Thus, the suitability of $R1$ does not appear to depend on the benchmark chosen in our test set.
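To make the fitting procedure concrete, the sketch below fits an $R1$-style model by ordinary least squares, under the assumption that $R1$ is linear in the average frequency and the max-to-average disparity, i.e., $P \approx c_0 + c_1 f_{avg} + c_2 \Delta$. This assumed form and the names are ours, not quoted from the text.

```python
# Hypothetical sketch: least-squares fit of P ~ c0 + c1*f_avg + c2*delta
# (delta = f_max - f_avg), solved via the 3x3 normal equations in pure Python.

def fit_r1(samples):
    """samples: list of (f_avg, delta, power). Returns (c0, c1, c2)."""
    rows = [(1.0, f, d) for f, d, _ in samples]   # design-matrix rows
    b = [p for _, _, p in samples]
    # Normal equations: (A^T A) x = A^T b.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * p for r, p in zip(rows, b)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [a - factor * c for a, c in zip(m[r], m[col])]
    return tuple(m[i][3] / m[i][i] for i in range(3))
```

With noise-free synthetic data the fit recovers the generating coefficients exactly, which is a quick sanity check before applying it to measured power samples.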
%%%%%%%%%
%%%%%%%%%
\Large
\newsavebox{\boxsameworkload}
\begin{lrbox}{\boxsameworkload}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Regression Model} &\multirow{2}{*} {Res $\le$ 5\%} & \multirow{2}{*}{Res $\le$ 3\%} & \multirow{2}{*}{Max \%} &\multirow{2}{*}{ Avg \%}&\multirow{2}{*}{ Max Res} &\multirow{2}{*}{ Min Res}  &\multirow{2}{*}{ Avg RelRes} \\
\cline{1-2}
Variables & Reg Func & \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{}\\
\hline
\multirow{5}{*}{\AvgFreq, \DisparityMaxAvg} &R1 &  1.000 & 0.985 & 0.042 &  0.004 & 0.255 & -0.801 & 0.111\\
&R2 &  1.000 & 0.985 & 0.036 &  0.004 & 0.681 & -0.356 & 0.101\\
&R3 &  1.000 & 0.980 & 0.040 &  0.004 & 0.758 & -0.235 & 0.103\\
&R4 &  1.000 & 0.985 & 0.036 &  0.003 & 0.679 & -0.336 & 0.097\\
&R5 &  1.000 & 0.985 & 0.031 &  0.003 & 0.600 & -0.308 & 0.093\\
\hline
\multirow{5}{*}{\AvgFreq, \DisparityMaxMin} &R1 &  0.942 & 0.714 & 0.078 &  0.022 & 1.445 & -1.840 & 0.577\\
&R2 &  0.942 & 0.700 & 0.069 &  0.022 & 1.795 & -1.500 & 0.576\\
&R3 &  0.942 & 0.714 & 0.074 &  0.022 & 1.802 & -1.466 & 0.575\\
&R4 &  0.942 & 0.714 & 0.067 &  0.022 & 1.762 & -1.514 & 0.574\\
&R5 &  0.557 & 0.328 & 0.240 &  0.054 & 3.534 & -4.547 & 1.372\\
\hline
\multirow{5}{*}{\AvgFreq, \DisparityAvgMin} &R1 &  0.700 & 0.485 & 0.159 &  0.039 & 3.172 & -3.013 & 1.009\\
&R2 &  0.328 & 0.242 & 0.316 &  0.086 & 5.387 & -6.195 & 2.222\\
&R3 &  0.700 & 0.471 & 0.151 &  0.039 & 2.862 & -3.169 & 0.994\\
&R4 &  0.442 & 0.285 & 0.183 &  0.064 & 5.108 & -5.061 & 1.692\\
&R5 &  0.157 & 0.114 & 0.369 &  0.141 & 8.047 & -8.483 & 3.616\\
\hline
\multirow{5}{*}{\AvgFreq, $\sigma$}
&R1 &  0.957 & 0.657 & 0.080 &  0.024 & 1.307 & -1.964 & 0.637\\
&R2 &  0.928 & 0.642 & 0.067 &  0.024 & 1.880 & -1.407 & 0.639\\
&R3 &  0.957 & 0.671 & 0.081 &  0.024 & 1.948 & -1.300 & 0.637\\
&R4 &  0.928 & 0.671 & 0.066 &  0.024 & 1.847 & -1.413 & 0.635\\
&R5 &  0.728 & 0.400 & 0.143 &  0.042 & 3.376 & -3.271 & 1.086\\
\hline
\multirow{2}{*}{$f_1,...,f_N$}&R6&  0.585 & 0.385 & 0.191 &  0.046 & 3.624 & -3.324 & 1.179\\
&R7 &  0.585 & 0.385 & 0.197 &  0.046 & 3.732 & -3.236 & 1.170\\
\hline
\end{tabular}
\end{lrbox}
\normalsize
\begin{table}[htbp!]
\caption{Comparison of different regression models with the single benchmark \textcode{410.bwaves} as the workload.}
\begin{center}
\label{tab:sameworkload}
\scalebox{0.39}{\usebox{\boxsameworkload}}
\end{center}
\end{table}
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% Next Section
%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%
