\section{Solution method}
\label{sec:solution}

The optimization problem is solved using an optimization routine that we wrote ourselves. This routine is supplied with this report.

\subsection{Algorithm}

The algorithm implemented is Powell's classical SQP algorithm, where SQP stands for Sequential Quadratic Programming. In each iteration a constrained quadratic program is solved: the constraints are linearized at the current iterate $x_k$, and the resulting KKT system is solved to obtain the step direction $p_k$ and the Lagrange multipliers $\lambda$ and $\mu$. This is done using the Matlab function \emph{quadprog}. The next Hessian approximation $B_{k+1}$ is computed by a BFGS update formula, using the Lagrangian gradient $\nabla_{x} \mathcal{L}(x,\lambda,\mu)$. In this BFGS update Powell's modification is used to ensure that the new Hessian approximation remains positive definite.
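Powell's modification of the BFGS update can be written out as follows. With $s_k = x_{k+1} - x_k$ and $y_k = \nabla_x \mathcal{L}(x_{k+1},\lambda,\mu) - \nabla_x \mathcal{L}(x_k,\lambda,\mu)$, the damped difference vector is
\[
	r_k = \theta_k y_k + (1-\theta_k) B_k s_k, \qquad
	\theta_k = \begin{cases} 1 & \text{if } s_k^{T} y_k \ge 0.2\, s_k^{T} B_k s_k, \\[0.5em] \dfrac{0.8\, s_k^{T} B_k s_k}{s_k^{T} B_k s_k - s_k^{T} y_k} & \text{otherwise}, \end{cases}
\]
and the update is the standard BFGS formula with $y_k$ replaced by $r_k$:
\[
	B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{r_k r_k^{T}}{s_k^{T} r_k}.
\]
The damping guarantees $s_k^{T} r_k \ge 0.2\, s_k^{T} B_k s_k > 0$, so $B_{k+1}$ remains positive definite even when the curvature condition $s_k^{T} y_k > 0$ fails. (The thresholds $0.2$ and $0.8$ are the usual choices in this formulation.)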

Globalisation is achieved by a line search with Armijo backtracking on an $L_1$ merit function that measures progress in both the objective and the equality constraints. This merit function is formulated as $T_{1}(x) = f(x) + \sigma \|g(x)\|_{1}$, with $\sigma \ge \|\lambda\|_{\infty}$.
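A minimal sketch of this line search is given below. It is illustrative only, not the report's Matlab implementation; the backtracking factor \texttt{beta}, the Armijo constant \texttt{c}, and the directional-derivative argument \texttt{D} are our own choices for the sketch.

```python
import numpy as np

def armijo_backtracking(f, g, x, p, sigma, D, beta=0.5, c=1e-4):
    """Backtracking line search on the L1 merit function
    T1(x) = f(x) + sigma * ||g(x)||_1.
    D is the directional derivative of T1 at x along p; it must be
    negative (p a descent direction) for the loop to terminate."""
    T1 = lambda z: f(z) + sigma * np.sum(np.abs(g(z)))
    alpha, T0 = 1.0, T1(x)
    # Shrink the step until the Armijo sufficient-decrease condition holds.
    while T1(x + alpha * p) > T0 + c * alpha * D:
        alpha *= beta
    return alpha
```

For a descent direction the full step $\alpha = 1$ is accepted whenever it already produces sufficient decrease, which is what makes the fast local convergence of the SQP step available near the solution.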

\subsection{Stopping criterion}

The stopping criterion is implemented as a tolerance on $\|\nabla_x \mathcal{L}(x_k,\lambda_k,\mu_k)\|$ and $\|g(x_k)\|$, both of which must vanish at the solution. Since convergence is fast close to the solution, tightening the tolerance generally has little influence on the number of iterations required; for example, making the tolerance ten times smaller results in only five extra iterations.
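The test itself is a simple conjunction of the two residual norms; a sketch (with a tolerance value chosen here only for illustration):

```python
import numpy as np

def converged(grad_L, g_val, tol=1e-8):
    """KKT-based stopping test: both the Lagrangian gradient and the
    equality-constraint residual must be below the tolerance."""
    return np.linalg.norm(grad_L) <= tol and np.linalg.norm(g_val) <= tol
```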

\subsection{Derivatives}

The derivatives needed in the algorithm are computed using the complex-step method (the `MATLAB imaginary trick'). This stems from the following observation: if $f: \mathbb{R}^{n} \to \mathbb{R}$ is analytic, then for $t = 10^{-100}$ we have
\[ \nabla f(x)^{T}p = \frac{\Im(f(x+itp))}{t} + \mathcal{O}(t^{2}). \]
\noindent The main advantage of this trick is that the derivatives are computed up to machine precision: there is no subtraction of nearly equal values, so no cancellation error. A finite-difference approach with $t=\sqrt{\epsilon_{\mathrm{mach}}}$, by contrast, yields an accuracy of only $\sqrt{\epsilon_{\mathrm{mach}}}$. A disadvantage is that functions that discard the imaginary part, such as \emph{norm}, cannot be used.
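The trick carries over directly to any language with complex arithmetic. A self-contained sketch (function and helper names are our own), which recovers the full gradient one coordinate at a time:

```python
import cmath

def complex_step_grad(f, x, t=1e-100):
    """Gradient of f at x (a list of floats) via the complex-step trick:
    component j is Im(f(x + i*t*e_j)) / t.  No subtractive cancellation
    occurs, so the result is accurate to machine precision."""
    grad = []
    for j in range(len(x)):
        xp = [complex(v) for v in x]
        xp[j] += 1j * t          # perturb coordinate j along the imaginary axis
        grad.append(f(xp).imag / t)
    return grad

# Example: f(x) = x0^2 + sin(x1), whose exact gradient is (2*x0, cos(x1)).
f = lambda x: x[0] ** 2 + cmath.sin(x[1])
g = complex_step_grad(f, [3.0, 0.0])   # equals [6.0, 1.0] to machine precision
```

Note that \texttt{f} must be written so that it accepts complex arguments all the way through, which is exactly the restriction on functions like \emph{norm} mentioned above.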

\subsection{Convergence}

The local convergence of a quasi-Newton method on the unconstrained problem is Q-superlinear. This also holds for the constrained problem, as can be seen in Figure \ref{fig:convergence}. This figure shows results for an experiment with both equality and inequality constraints. Figure \ref{fig:error} shows the error of the positions at iteration $k$ with respect to the final position obtained by the algorithm. Figure \ref{fig:convergencesub} shows a plot of
\[ \frac{\| x_{k+1} - \bar{x} \|}{\| x_{k} - \bar{x} \|}, \] \noindent with $\bar{x}$ the final position of the cloth. From this figure it is clear that
\[
	\| x_{k+1} - \bar{x} \| \le C_k \| x_{k} - \bar{x} \| \hspace{0.6cm} \text{with} \hspace{0.6cm} C_k\to 0 \hspace{0.6cm} \text{close to $\bar{x}$},
\] \noindent and also
\[
	\limsup_{k \to \infty} \frac{|| x_{k+1} - \bar{x} ||}{|| x_{k} - \bar{x} ||} < 1
\]
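The ratios plotted in Figure \ref{fig:convergencesub} can be computed from the iterate history as follows (a sketch with hypothetical names; the last stored iterate plays the role of $\bar{x}$):

```python
import numpy as np

def convergence_ratios(history):
    """Given the iterate history [x_0, ..., x_N], with x_N taken as xbar,
    return the ratios ||x_{k+1} - xbar|| / ||x_k - xbar||.  A sequence of
    ratios tending to 0 indicates Q-superlinear convergence."""
    xbar = history[-1]
    errs = [np.linalg.norm(x - xbar) for x in history[:-1]]
    return [e1 / e0 for e0, e1 in zip(errs[:-1], errs[1:]) if e0 > 0]
```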

\subsection{Testing the algorithm}

To check correctness, the solutions obtained by our own solver are compared with those computed by the Matlab function \emph{fmincon}, Matlab's solver for minimizing constrained nonlinear multivariable functions. The two solvers agree, which assures us that our solver works correctly.

