%%
%% optimization.tex
%%
\newpage
\section{Optimization}

\subsection{Root finding}
A \emph{root} is a solution to an equation. Formally, finding a root means finding any point $x$ at which the function satisfies $f(x)=0$. For example, the only root of the following equation is at $x=2$.

\begin{equation*}
    f(x) = 2x -4 =0,
\end{equation*}

\begin{sagesilent}
f(x) = 2*x-4
pt = point((2,0),rgbcolor='white',pointsize=90,faceted=True) 
fplot = plot(f, 0, 5)
\end{sagesilent}

\sageplot[width=8cm]{
plot(fplot+pt, fontsize=16)
}

However, an equation can have several roots.
\begin{equation*}
    f(x) = x^2 +5x +4 =0,
\end{equation*}

\begin{sagesilent}
f(x) = x^2 + 5*x + 4
pt1 = point((-1,0), rgbcolor='white',pointsize=90,faceted=True) 
pt2 = point((-4,0), rgbcolor='white',pointsize=90,faceted=True) 
fplot = plot(f, -5, 5)

\end{sagesilent}
\sageplot[width=8cm]{
plot(pt1+pt2+fplot, ymax= 20, fontsize=16)
}

Root finding is straightforward for polynomial functions of order 2, because we can obtain their roots with the quadratic formula:

\begin{equation*}
x=\frac{-b \pm \sqrt {b^2-4ac}}{2a},
\end{equation*}
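As a quick check, we can apply the formula to the earlier example $x^2 + 5x + 4$ (a small Sage sketch; the variable names \texttt{a}, \texttt{b}, \texttt{c} and \texttt{disc} are our own):

\begin{sageblock}
a, b, c = 1, 5, 4                  # coefficients of x^2 + 5*x + 4
disc = sqrt(b^2 - 4*a*c)           # discriminant: sqrt(25 - 16) = 3
roots = [(-b + disc)/(2*a), (-b - disc)/(2*a)]
# roots is [-1, -4], matching the two marked points in the plot above
\end{sageblock}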

However, the roots of polynomial functions of higher orders are much more difficult to obtain. To find the roots of complicated polynomial expressions and of other functions, we use numerical optimization methods.

\subsection{Critical points}
A critical point of a function is any point where the derivative of the function is zero, $f'(x) = 0$. There are 3 types of critical points:

\begin{enumerate}
    \item Local maximum: a point that is higher than its neighbours.
    \item Local minimum: a point that is lower than its neighbours.
    \item Saddle point: a point where the derivative is zero but which is neither a maximum nor a minimum. Typically, points to one side are lower and points to the other side are higher, or vice versa. In one dimension, this is an inflection point, where the function changes from concave to convex.

\end{enumerate}

To locate the critical points of a function, we first obtain its derivative and find the values at which it equals zero. After that, we compute the second derivative and evaluate it at those values. If the second derivative is positive, we have a minimum. If the second derivative is negative, we have a maximum. If the second derivative is equal to zero, we have a saddle point.

Generally, if we restrict the function to an interval, the global maximum is the point where $f(x)$ attains its largest value on that interval, and the global minimum is the point where it attains its smallest value.
\subsection{Critical points in Sage}
For example, in the following polynomial function:
\begin{equation*}
f(x) = x^3 -x,
\end{equation*}

\sageplot[width=8cm]{
plot(x^3-x, fontsize=16)
}

We first need to find the points at which the first derivative is zero.

\begin{sageblock}
f(x) = x^3 -x
dfdx = f.diff()
sols = solve(dfdx==0,x, solution_dict=True)
\end{sageblock}

And we obtain critical points at $x=\sage{sols[0][x]}$ and $x=\sage{sols[1][x]}$. To see whether each solution is a minimum or a maximum, we evaluate the second derivative of the function at it:

\begin{sageblock}
d2fdx2 = f.diff(2)
for i in range(len(sols)):
    if d2fdx2(sols[i][x]) < 0:
        print('f(x) has a maximum at x =', sols[i][x])
    else:
        print('f(x) has a minimum at x =', sols[i][x])
    
\end{sageblock}

We can plot the function (black) together with its first (red) and second (blue) derivatives.
\begin{sagesilent}
fplot = plot(f, -1,1, rgbcolor='black')
dfdxplot = plot(dfdx,-1,1, rgbcolor='red')
df2dx2plot = plot(d2fdx2,-1,1, rgbcolor='blue')
\end{sagesilent}

\sageplot[width=8cm]{
plot(fplot+dfdxplot+df2dx2plot, ymax=3.5, ymin=-3.5,fontsize=16)
}


\subsection{Optimization}
Optimization aims to find the best element from a set of alternatives. In particular, it means finding the value of the independent variable at which a function fulfills a given condition. This generally involves adding a condition to an equation. For example, if we want to know at which point the function $f(x) = x^2$ takes the value one, we can rewrite the condition as $x^2 = 1$ and formulate a new function, for example $g(x) = x^2 - 1 = 0$. For that reason, optimization often reduces to finding the roots of different functions.

\subsection{The Newton-Raphson method}
This is an iterative method for finding roots of differentiable functions. The method starts with an initial guess $x_0$ and repeatedly computes better guesses that are closer to a root of the function.

The idea behind the Newton-Raphson method is to calculate the tangent line to the function at the point $x_0$ and then compute the value at which this line crosses the $x$-axis (where $y=0$). This new value is taken as the next guess, a new tangent is computed there, and the process is repeated. In the end, the sequence of guesses converges to a point where $f(x)=0$.

Taking the first two terms of the Taylor series of $f$ around $x_0$, we know:

\begin{equation*}
f(x) \approx f(x_0)+ f'(x_0)(x-x_0)  
\end{equation*}

Because we want to find a root, we set $f(x)=0$:
\begin{equation*}
0 \approx f(x_0) +f'(x_0)(x-x_0)
\end{equation*}

and solve for the value of $x$ that satisfies the equation
\begin{eqnarray*}
0 \approx f(x_0) + f'(x_0)x - f'(x_0)x_0  \\
f'(x_0)x \approx f'(x_0)x_0 - f(x_0)
\end{eqnarray*}

this gives:
\begin{equation*}
x \approx \frac{\cancel{f'(x_0)}}{\cancel{f'(x_0)}}x_0 -\frac{f(x_0)}{f'(x_0)}
\end{equation*}

which results in 
\begin{equation*}
x \approx x_0 -\frac{f(x_0)}{f'(x_0)}
\end{equation*}

In general, we take each new value of $x$ as the next $x_0$ and repeat the update, performing successive approximations until the change between iterations is smaller than a chosen tolerance. At that point the iterate barely moves, which means $f(x)/f'(x) \approx 0$ and we have reached a root of $f(x)$.
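The update rule above can be sketched directly in Sage (a minimal illustration; the function name \texttt{newton} and its arguments are our own, not a built-in):

\begin{sageblock}
def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    # Repeatedly apply x <- x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        step = f(x)/dfdx(x)
        x = x - step
        if abs(step) < tol:    # stop when the update becomes tiny
            break
    return x

f(x) = x^3 - x
r = newton(f, f.diff(), x0=2.0)   # converges to the root at x = 1
\end{sageblock}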

The Newton-Raphson method can be replaced by the secant method when we do not know the analytic derivative of the function. In that case, the derivative is approximated by the slope of the line through 2 nearby points on the function.
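A minimal sketch of this secant variant, under the same caveats (the helper name \texttt{secant} is ours):

\begin{sageblock}
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    # Approximate f'(x) by the slope through the last two iterates
    for _ in range(max_iter):
        slope = (f(x1) - f(x0))/(x1 - x0)
        x0, x1 = x1, x1 - f(x1)/slope
        if abs(x1 - x0) < tol:
            break
    return x1

f(x) = x^3 - x
r = secant(f, 1.5, 2.0)   # no derivative needed; approaches x = 1
\end{sageblock}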

A problem arises if $f'(x_0)=0$, because the formula then divides by zero. Additionally, an initial value that is far away from the root, or near a local minimum, can send the approximation off course. Finally, near an inflection point the iteration can cycle and never converge.

\subsection{Example}
The Newton-Raphson algorithm is an iterative procedure used to calculate the roots of a continuous function. We first choose an initial guess $x_0$ near the root of interest. Next, we calculate the tangent line to the function at this initial guess:
\begin{equation*}
y = f(x_0) + f'(x_0)(x - x_0),
\end{equation*}
and the point where this line crosses the $x$-axis becomes the next guess.

To calculate the square root of a number $k$ we need to find the root of the following equation:

\begin{equation*}
    f(x) = x^2 -k = 0,
\end{equation*}
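With $f'(x) = 2x$, the Newton update becomes $x \leftarrow x - (x^2 - k)/(2x)$, which simplifies to the classical Babylonian rule $x \leftarrow (x + k/x)/2$. A small sketch for $k = 2$ (the starting guess is arbitrary):

\begin{sageblock}
k = 2
x0 = 1.0                    # arbitrary positive starting guess
for _ in range(6):
    x0 = (x0 + k/x0)/2      # Newton step for f(x) = x^2 - k
# x0 is now approximately sqrt(2) = 1.41421356...
\end{sageblock}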
