\section{Conclusions and Future Work}\label{sec:conclusion}

The best instances within the family of models we explore permit a number of interesting high-level speculations.

For instance, the simplified model can be used to identify the Pareto frontier presented in \refsec{subsec:basicmodel} for various objectives.
For the objective of maximizing throughput, the model suggests scaling all cores up to the highest frequency among them.
This strategy improves throughput without increasing power consumption.
For the objective of minimizing power consumption, the model suggests scaling up or down the cores to the average frequency among the cores.
Unlike existing single-core power models, our simple model indicates that further scaling down cores that already run below the highest speed cannot reduce power.
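To make the two strategies concrete, the following is a minimal sketch (the helper names and example frequencies are ours, purely for illustration; we assume, as in the simplified model, that throughput is proportional to the aggregate core frequency):

```python
# Hypothetical sketch of the two scaling strategies suggested by the
# simplified model. Frequencies are in MHz; throughput is assumed to be
# proportional to the sum of core frequencies.

def throughput_strategy(freqs):
    """Maximize throughput: raise every core to the highest frequency."""
    return [max(freqs)] * len(freqs)

def power_strategy(freqs):
    """Minimize power: move every core to the average frequency, which
    preserves the aggregate frequency (and hence throughput)."""
    avg = sum(freqs) / len(freqs)
    return [avg] * len(freqs)

freqs = [2000, 2600, 1800, 2600]       # per-core frequencies (MHz)
print(throughput_strategy(freqs))      # [2600, 2600, 2600, 2600]
print(power_strategy(freqs))           # [2250.0, 2250.0, 2250.0, 2250.0]
print(sum(power_strategy(freqs)))      # 9000.0, same aggregate as the input
```

Note that the power-minimizing strategy keeps the sum of frequencies unchanged, which is why it maintains throughput while lowering the highest core speed.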

Furthermore, our power models can be used to analyze and quantify the power characteristics of applications and hardware architectures.
As shown in \reftab{tab:basicmodelparameter}, benchmark \textcode{447.dealII} has smaller model parameters than benchmark \textcode{482.sphinx3}.
Consequently, the same frequency drop leads to a smaller power reduction for benchmark \textcode{447.dealII} than for benchmark \textcode{482.sphinx3}.
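As a hypothetical illustration (the linear form and the coefficients below are our assumptions for exposition, not measured values), the effect of a model coefficient on power savings can be sketched as:

```python
# Under a linear power model P = a0 + a1 * f, the power saved by a
# frequency drop df is a1 * df, so a benchmark with a smaller model
# coefficient saves less power for the same frequency drop.
# The coefficients and units below are illustrative only.

def power_saving(a1, df):
    """Power reduction for a frequency drop df under P = a0 + a1 * f."""
    return a1 * df

df = 0.2                       # frequency drop (illustrative units)
print(power_saving(7.5, df))   # small coefficient -> small saving: 1.5
print(power_saving(25.0, df))  # large coefficient -> large saving: 5.0
```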

More generally, our models suggest increasing core speed for an application if its power consumption is insensitive to core speed.
Doing so significantly improves throughput without increasing power.
By contrast, the models suggest reducing the frequency differences among cores for applications whose power consumption changes sharply with core speed.
Doing so significantly reduces the total power while maintaining throughput.
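The guidance above can be distilled into a hypothetical decision rule (the function name, threshold, and strategy labels are ours, purely for illustration):

```python
# Hypothetical decision rule: pick a scaling strategy based on how
# sensitive an application's measured power is to core speed.

def choose_strategy(power_sensitivity, threshold=1.0):
    """Return a strategy label; the threshold is an illustrative cutoff."""
    if power_sensitivity < threshold:
        # Power barely changes with speed: raise all cores to the
        # maximum frequency to improve throughput at little power cost.
        return "scale-all-to-max"
    # Power changes sharply with speed: equalize core frequencies to
    # reduce power while maintaining aggregate throughput.
    return "equalize-frequencies"

print(choose_strategy(0.3))   # scale-all-to-max
print(choose_strategy(8.0))   # equalize-frequencies
```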

We observe that the power characteristics for a given application are similar across multiple hardware architectures.
This observation suggests that power is an inherent characteristic of the workload, largely independent of the hardware platform.
We also observe that different benchmarks draw different amounts of power on different platforms, suggesting that each workload has its own most energy-efficient architecture.

Our experiments show that the power model coefficients decrease with successive architecture generations across the benchmarks.
For example, the average value of $a_1$ and $a_3$ over all the benchmarks decreases from 13.2 on Nehalem to 9.8 on Sandy Bridge, 4.9 on Ivy Bridge, and 4.6 on Haswell.
This trend suggests that the newer generations are indeed more energy-efficient.

However, these specific implications may not hold on future architectures.
For this reason, we emphasize that our work describes a family of models, which we explored systematically to derive a few good instances.
Should architectures or applications change dramatically, one can carry out the same systematic process within the family of models we considered and derive more appropriate models that reflect these changes.

One significant limitation of the current model is that it needs to be re-trained for every new workload and hardware platform.
We are developing techniques that can avoid the retraining problem.
For instance, we imagine that a more first-principles modeling approach, akin to some of our own related work~\cite{Choi:2013er,Czechowski:2013fk}, could be combined with the approach of this paper to yield models that are accurate, more principled, and more informative.
