

To further characterize the problem we are facing, we need to look at
the bigger picture. The input is a point in the parameter space, and the
output is the time a particular program takes to run. We can then compare
a series of runs: with inlining, without inlining, and with non-\FDO\
inlining.

From this starting point we must define an error function and an algorithm
that searches for the optimal point, if one exists, or at least gets as close
to it as possible. However, we have no information about the space bounding
the function: we must define and then minimize the error function over an
unknown space.
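As an illustration of one possible choice (not a definition from this work), the error of a parameter vector could be the measured runtime normalized by a baseline run; the \texttt{run\_benchmark} helper below is a hypothetical placeholder for the actual compile-and-run step:

```python
import time

def run_benchmark(params):
    """Hypothetical placeholder: compile the program with the given
    inlining parameters, run it, and return the wall-clock time."""
    start = time.perf_counter()
    time.sleep(0.01)  # stands in for the real compile-and-run step
    return time.perf_counter() - start

def error(params, baseline):
    """Relative slowdown with respect to a baseline runtime (e.g. the
    run with default inlining heuristics): 0.0 means exactly as fast
    as the baseline, negative values mean an improvement."""
    return run_benchmark(params) / baseline - 1.0
```

Normalizing by a baseline makes the error comparable across programs whose absolute runtimes differ by orders of magnitude.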

Well-known machine learning algorithms such as gradient descent will not
necessarily work properly in our environment because, as mentioned
in~\ref{inlining:candidate}, the function to be optimized is not known to
be differentiable or convex, and the space that bounds the function is
also unknown.

One possible approach to this problem is to use other kinds of algorithms,
such as Simulated Annealing~\cite{Zhong2009} or SPSA (Simultaneous Perturbation
Stochastic Approximation)~\cite{Spall1999,Spall2012}, because these algorithms
make no assumptions about the function or the space. Both are non-deterministic
and use random points to avoid being trapped in a local minimum.
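A minimal SPSA sketch, assuming Spall's standard gain sequences $a_k = a/k^{\alpha}$ and $c_k = c/k^{\gamma}$; its appeal here is that each iteration needs only two evaluations of the objective, regardless of the number of parameters:

```python
import random

def spsa(f, theta, iters=100, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Minimal SPSA sketch: estimate the gradient from two evaluations
    of f per iteration, using a random simultaneous perturbation of
    all coordinates, then take a decaying gradient step."""
    for k in range(1, iters + 1):
        ak = a / k ** alpha          # step-size gain
        ck = c / k ** gamma          # perturbation gain
        delta = [random.choice([-1.0, 1.0]) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)
        theta = [t - ak * diff / (2.0 * ck * d)
                 for t, d in zip(theta, delta)]
    return theta
```

On a smooth illustrative objective such as $f(x) = \sum_i (x_i - 3)^2$ a few hundred iterations bring $\theta$ close to the minimizer; on a noisy runtime measurement the gains $a$ and $c$ would need tuning.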

Another possibility is to apply a reinforcement learning method and use the
policy it returns to guide the possible changes (up or down) in the values
of the parameters. This approach can be fully automated and corresponds
to another research path.
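As a sketch under strong simplifying assumptions (a single discretized parameter, and a hypothetical black-box cost standing in for the measured runtime), a tabular Q-learning agent whose actions are the up/down moves mentioned above could look like:

```python
import random

def q_learning(cost, n_states, episodes=200, steps=20,
               alpha=0.5, gamma=0.9, eps=0.2):
    """Hypothetical sketch: tabular Q-learning over one discretized
    parameter. States are parameter levels 0..n_states-1, the actions
    are 0 = 'down' and 1 = 'up', and the reward is the negative of a
    black-box cost (e.g. measured runtime) at the new level."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 1 if q[s][1] > q[s][0] else 0
            # Move one level up or down, clamped to the valid range.
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = -cost(s2)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    # Greedy policy: preferred direction at each parameter level.
    return ["up" if q[s][1] > q[s][0] else "down" for s in range(n_states)]
```

The returned policy directly encodes, for each current parameter value, whether the next trial should raise or lower it, which is the guidance role described above.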
