\subsection{Design}
\label{Design}
Finding a solution that satisfies the optimized version of the \textsc{WeightedIntervalCover} problem is clearly a non-trivial task: as we prove in Section \ref{NPcompleteness}, \textsc{WeightedIntervalCover} is an NP-complete problem.

The first algorithm that we have designed has a \textit{greedy} approach. Namely, the algorithm performs the following steps:
\begin{enumerate}
 \item the algorithm creates a local copy of the sequence of $y_i$ that it can use and modify.
 \item \label{scan}it scans the sequence and selects the smallest nonzero number\footnote{We have also coded the complementary algorithm, namely the one that chooses the maximum number first. The results produced are the same. The code of this other algorithm is located in the file \texttt{SolutionGenerator.java}, method \texttt{generateSolutionMax()}.} (to be more precise, it chooses the \textit{first occurrence} of that number). If every number has been reduced to $0$, it stops.
 \item it creates an interval of length 1 that covers the minimum number (that is, if the minimum number is for instance $y_{k}$, the interval would be $[k,k+1]$). The weight of this interval is the minimum number itself.
 \item the algorithm extends the interval in both directions as far as possible, that is, over all adjacent numbers that are at least as large as the interval's weight (a smaller number would become negative). In the first iteration the interval covers the entire sequence, since the chosen number is the global minimum.
 \item for each element covered by the interval, the algorithm subtracts the interval's weight from it. Then it goes back to step \ref{scan}.
\end{enumerate}
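The steps above can be sketched in Java as follows. This is only a hedged illustration: the class and member names are ours, we read ``minimum'' as the smallest value not yet reduced to $0$, and the project's actual implementation is \texttt{generateSolutionMin()} in \texttt{SolutionGenerator.java}.

```java
import java.util.*;

public class GreedyMinSketch {

    /** A weighted interval over the index range [start, end). */
    record Interval(int start, int end, int weight) {}

    static List<Interval> greedyMin(int[] y) {
        int[] seq = y.clone();                        // step 1: local, modifiable copy
        List<Interval> intervals = new ArrayList<>();
        while (true) {
            // step 2: first occurrence of the smallest nonzero number
            int minIdx = -1;
            for (int i = 0; i < seq.length; i++) {
                if (seq[i] > 0 && (minIdx == -1 || seq[i] < seq[minIdx])) minIdx = i;
            }
            if (minIdx == -1) break;                  // everything reduced to 0: done
            int w = seq[minIdx];
            // steps 3-4: unit interval on the minimum, extended both ways while
            // the adjacent elements are at least as large as the weight
            int lo = minIdx, hi = minIdx + 1;
            while (lo > 0 && seq[lo - 1] >= w) lo--;
            while (hi < seq.length && seq[hi] >= w) hi++;
            // step 5: subtract the interval's weight from every covered element
            for (int i = lo; i < hi; i++) seq[i] -= w;
            intervals.add(new Interval(lo, hi, w));
        }
        return intervals;
    }

    public static void main(String[] args) {
        // The "three 2s" case: a single interval of weight 2 covers all of them.
        for (Interval iv : greedyMin(new int[]{2, 2, 2})) System.out.println(iv);
    }
}
```

Each iteration reduces at least one element (the chosen minimum) to $0$, so the loop always terminates.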

We tested the algorithm\footnote{The code of the algorithm is located in the file \texttt{SolutionGenerator.java}, method \texttt{generateSolutionMin()}.} on some examples and it performed quite well. As a matter of fact, the solutions produced by this algorithm satisfy the requirements in almost all the tests provided along with this project. Only in four cases did the produced solution have a \textit{K} larger than the requested one.

We tried to understand the reasons behind this quite satisfactory performance, and in doing so we came up with two observations:
\begin{itemize}
 \item When a number of the sequence is ``reduced'' to $0$, meaning that it has been fully covered by intervals whose weights sum to the number itself, it acts as a sort of ``barrier'': a $0$ in the sequence prevents the intervals covering the numbers to its left from being extended over the numbers to its right\footnote{Obviously we are excluding the useless option of intervals of weight $0$.}. The same observation holds in general: an interval of weight $x$ cannot be extended past a number $n<x$. Hence intervals with small weights are more likely to cover a larger set of numbers, a property that becomes interesting when combined with the next observation.

\item The second observation is that we can save some intervals if we succeed in fully covering \textit{equal} numbers with one interval: a run of three 2s, for example, can be covered with a single interval of weight 2.
\end{itemize}

Keeping these observations in mind, we designed a new algorithm to improve on the performance of the first one. To do so, we decided to stick to the following principles:
\begin{itemize}
 \item numbers that appear in the sequence more than once are more important than those that appear only once. In fact, as explained above, the former can help us save intervals.
 \item We should try to cover the small numbers first, since an interval with a small weight can be extended over a larger portion of the sequence.
\end{itemize}

The new algorithm operates as follows:
\begin{enumerate}
 \item it creates a local copy of the sequence of $y_i$ that it can use and modify.
 \item \label{scan2} it scans the sequence and for each distinct number it registers the number of occurrences that it has in the sequence (we will call this the \textit{multiplicity} of the number).
 \item Using the results from the scan performed in the previous step the algorithm creates a priority queue for the numbers using the following ordering to set the priority:
	\begin{itemize}
	\item if two numbers both have multiplicity 1, they have equal priority
	\item if one number has multiplicity greater than 1 and the other has multiplicity 1, the former has higher priority than the latter
	\item if two numbers both have multiplicity greater than 1, the smaller number has higher priority (clearly they cannot be equal)
	\end{itemize}
 \item The algorithm picks the first number in the queue and tries to create intervals that each cover \textit{at least} two occurrences of this number. For this to happen, all the numbers between the two occurrences must be greater than the number itself (otherwise we cannot create the interval). If no interval with this property is found for the number under consideration, the algorithm picks the next number in the queue, and it continues doing so until a non-empty set of intervals is found.
 \item The algorithm updates the sequence of $y_i$s by subtracting the number considered (which is the weight of the intervals found) from each element covered by those intervals.
 \item Since the data has changed the algorithm goes back to step \ref{scan2}.
\end{enumerate}
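The ordering of step 3 can be sketched with a comparator. This is a hedged illustration: \texttt{Entry} and \texttt{PRIORITY} are hypothetical names of ours, while the real queue is built inside \texttt{equalNumbersFirst()} in \texttt{SolutionGenerator.java}.

```java
import java.util.*;

public class PriorityOrderingSketch {

    /** A distinct value of the sequence together with its multiplicity. */
    record Entry(int value, int multiplicity) {}

    /**
     * Step 3 ordering: repeated numbers come before unique ones, and among
     * repeated numbers the smaller comes first; unique numbers are tied.
     */
    static final Comparator<Entry> PRIORITY = (a, b) -> {
        boolean ra = a.multiplicity() > 1, rb = b.multiplicity() > 1;
        if (ra != rb) return ra ? -1 : 1;             // repeated beats unique
        if (!ra) return 0;                            // two unique numbers: equal priority
        return Integer.compare(a.value(), b.value()); // both repeated: smaller first
    };

    public static void main(String[] args) {
        Queue<Entry> q = new PriorityQueue<>(PRIORITY);
        q.add(new Entry(54, 2));
        q.add(new Entry(9, 1));
        q.add(new Entry(3, 2));
        while (!q.isEmpty()) System.out.println(q.poll().value());
        // prints 3, then 54, then 9
    }
}
```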

After a certain number of iterations, the execution will reach one of the following states:
\begin{itemize}
 \item The queue contains only numbers with multiplicity 1.
 \item The queue contains numbers with multiplicity greater than 1, but it is not possible to find intervals that cover at least two occurrences of them; the queue may also contain zero or more numbers with multiplicity 1.
 \item The queue contains no numbers (every number was successfully considered).
\end{itemize}
In the last case the algorithm can terminate and return the set of intervals.

In the first and second cases, the only thing we can do is run the first algorithm on the remaining data to find the last intervals, since the improvements can no longer be applied.

This improved implementation\footnote{The code of the algorithm is located in the file \texttt{SolutionGenerator.java}, method \texttt{equalNumbersFirst()}.} gives some benefit: on average we obtain slight improvements with respect to the performance of the first algorithm.

In the following table we can see some examples that quantify the improvements:

\begin{center}
% use packages: array
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
%
\begin{tabular}{|l|l|l|}\hline
\mc{1}{|c|}{\bfseries Input File} & \mc{1}{|c|}{\bfseries K using \texttt{generateSolutionMin()}} & \mc{1}{|c|}{\bfseries K using \texttt{equalNumbersFirst()}} \\ \hline
\mc{1}{|c|}{test04.WIC} & \mc{1}{|c|}{318} & \mc{1}{|c|}{309} \\ \hline
\mc{1}{|c|}{test05.WIC} & \mc{1}{|c|}{319} & \mc{1}{|c|}{314} \\ \hline
\mc{1}{|c|}{test06.WIC} & \mc{1}{|c|}{244} & \mc{1}{|c|}{237} \\ \hline
\mc{1}{|c|}{test07.WIC} & \mc{1}{|c|}{185} & \mc{1}{|c|}{181} \\ \hline
\mc{1}{|c|}{test09.WIC} & \mc{1}{|c|}{90} & \mc{1}{|c|}{89} \\ \hline
\mc{1}{|c|}{test12.WIC} & \mc{1}{|c|}{186} & \mc{1}{|c|}{185} \\ \hline
\mc{1}{|c|}{test18.WIC} & \mc{1}{|c|}{149} & \mc{1}{|c|}{148} \\ \hline
\mc{1}{|c|}{test24.WIC} & \mc{1}{|c|}{79} & \mc{1}{|c|}{78} \\ \hline
\mc{1}{|c|}{test25.WIC} & \mc{1}{|c|}{25} & \mc{1}{|c|}{24} \\ \hline
\mc{1}{|c|}{test27.WIC} & \mc{1}{|c|}{61} & \mc{1}{|c|}{60} \\ \hline
\mc{1}{|c|}{test28.WIC} & \mc{1}{|c|}{86} & \mc{1}{|c|}{85} \\ \hline
\mc{1}{|c|}{test29.WIC} & \mc{1}{|c|}{47} & \mc{1}{|c|}{46} \\ \hline
\end{tabular}
\end{center}

The default algorithm used by our program is the second one. However, it is possible to select which algorithm to use by passing arguments on the command line.

\subsection{Implementation}
The code for the first algorithm is given below:

\lgrindfile{source/min.tex}

It is rather simple and it works as described in the previous section.

\vspace{1cm}

The code for the second algorithm is given below:

\lgrindfile{source/equals.tex}

The interesting part is the one contained in the while loop. It consists of three blocks:
\begin{description}
 \item[block ``case 0''] In this block the algorithm analyzes the data and creates the priority queue. The main work is done by the function \texttt{findMultiplicities()}\footnote{The code of the algorithm is located in the file \texttt{SolutionGenerator.java}, method \texttt{findMultiplicities()}}, that scans the input sequence and finds the multiplicities for each distinct number. The code for this method can be seen below:

\lgrindfile{source/findMultiplicities.tex}

 \item[block ``case 1''] In this block the algorithm extracts the elements from the queue and tries to find the intervals that contain at least two occurrences of the numbers with multiplicity greater than 1. Here, the main work is done by the function \texttt{findIntervals()}\footnote{The code of the algorithm is located in the file \texttt{SolutionGenerator.java}, method \texttt{findIntervals()}.}, whose code is given below:

\lgrindfile{source/findIntervals.tex}

 \item[block ``case 2''] If we reach this block of code it means that the sequence contains numbers with multiplicity 1 and/or numbers that have multiplicity greater than 1 but for which there are no intervals that cover at least two occurrences of them. Thus, we run \texttt{generateSolutionMin()} on the remaining sequence.
 \end{description}


\subsection{Time Analysis}

First of all we must consider the running time of \texttt{generateSolutionMin()}.

The worst-case scenario is a sequence of $n$ distinct numbers, ordered in decreasing order.

Now:
\begin{itemize}
 \item Finding the minimum number requires $n$ comparisons (step \ref{scan} of the algorithm, described in Section \ref{Design}). This step is executed $n$ times, thus its total cost is quadratic, $O(n^2)$
 \item Extending the interval requires $n$ comparisons in the first execution, $n-1$ in the second and so on. The total cost is
\begin{center}
\begin{displaymath}
\sum_{i=1}^{n}i = \frac{n(n+1)}{2} = O(n^2)
\end{displaymath}
\end{center}
\end{itemize}

We can conclude that the total running time of the algorithm is quadratic.
\begin{center}
\begin{displaymath}
T_w^{\texttt{generateSolutionMin()}} = O(n^2)
\end{displaymath}
\end{center}

\vspace{1cm}

Imagining the worst case scenario for \texttt{equalNumbersFirst()} is slightly more complicated.

First of all, let us examine the complexity of the two helper functions.

\begin{itemize}
 \item \texttt{findMultiplicities()} simply scans the input. Assuming that the amount of time to create/update the hash table is constant, the running time is $O(n)$.
 \item \texttt{findIntervals()} scans the input sequence from the left, starting at the first occurrence of the number $y$ given as argument. Each number in the sequence is visited at most once, therefore the running time is $O(n)$.
\end{itemize}

Let's say that we have $k$ distinct numbers, each of which has multiplicity 2. Therefore, our sequence is made of $n = 2k$ numbers.

Moreover, in the worst-case scenario our sequence is the concatenation of two identical sequences of numbers, each ordered in increasing order.

For example if we consider the numbers $3,54,56,7,32$ our sequence would be:
\begin{center}
 $3,7,32,54,56,3,7,32,54,56$
\end{center}
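This worst-case layout, namely two identical increasing runs of the $k$ distinct values, can be built as in the following sketch (the helper name \texttt{worstCase()} is ours):

```java
import java.util.*;

public class WorstCaseSketch {

    /**
     * Builds the worst case described above: k distinct numbers, each with
     * multiplicity 2, laid out as two identical increasing runs (n = 2k).
     */
    static int[] worstCase(int[] distinct) {
        int[] run = distinct.clone();
        Arrays.sort(run);                        // one increasing run of the k values
        int k = run.length;
        int[] seq = new int[2 * k];              // n = 2k elements in total
        System.arraycopy(run, 0, seq, 0, k);     // first copy of the run
        System.arraycopy(run, 0, seq, k, k);     // second, identical copy
        return seq;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(worstCase(new int[]{3, 54, 56, 7, 32})));
        // -> [3, 7, 32, 54, 56, 3, 7, 32, 54, 56]
    }
}
```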

These would be the steps followed by \texttt{equalNumbersFirst()}:
\begin{itemize}
 \item \label{construct} the algorithm first runs \texttt{findMultiplicities()} to find the multiplicities and then builds the priority queue. The first operation requires $O(n)$ time. The second one requires $O(k\log k)$ time in the worst case (we have $k$ distinct numbers)\footnote{Using bottom-up construction of a heap requires instead $O(k)$ time.}. Since $k = n/2$, this is $O(n\log n)$, which will in any case be dominated by the final quadratic step.
 \item The algorithm extracts the head of the queue, which is represented by the minimum number, then it runs \texttt{findIntervals()} ($O(n)$), updates the data set ($O(n)$) and goes back to the step \ref{construct} to re-analyze the data ($O(n)$). Total complexity: $O(3n) = O(n)$.
 \item Now the algorithm extracts the items from the queue $k-1$ times, and in each trial the method \texttt{findIntervals()} returns an empty set of intervals, for a total cost of $O((k-1)n) = O(n^2)$, since $k = n/2$.
 \item After the unsuccessful extractions the algorithm passes the modified sequence to \texttt{generateSolutionMin()} that computes the remaining intervals in $O(n^2)$ time.
\end{itemize}

 Clearly the running time of \texttt{equalNumbersFirst()} is dominated by the running time of \texttt{generateSolutionMin()} hence:

\begin{center}
\begin{displaymath}
T_w^{\texttt{equalNumbersFirst()}} = O(n^2)
\end{displaymath}
\end{center}





