\section{Discussion and Conclusions}\label{sec:conclusion}

We firmly believe that it is nowadays crucial to investigate new models and techniques for scheduling contemporary applications subject to real-time requirements, especially given the trend towards parallelization to achieve higher performance. The theoretical results and investigations carried out in this work should be seen as a step in that direction. Some practical concerns and implementation details have been set aside in this work; they will be the focus of our future work.



We have presented a technique to compute a single DAG of servers $\GSSG_i$ for a task $\Task_i$ with different execution flows, and we have shown that these servers supply every execution flow of $\Task_i$ with the required CPU budget, so that the task can execute entirely irrespective of the execution flow taken at run time. Therefore, the multi-DAG parameter $\MultiDAG_i$ assumed in the task model can be replaced by its corresponding GSSG $\GSSG_i$, while the period and the deadline remain unchanged. 
With this, there is no need to consider every feasible interference scenario between all combinations of execution flows of all tasks in order to derive a schedulability test based on the internal structure of the tasks, as $\GSSG_i$ naturally upper-bounds the on-core interference that a task $\Task_i$ causes on the other tasks. Moreover, a GSSG is a special case of the synchronous parallel task model, which is in turn a special case of the DAG model; existing multi-core scheduling techniques for all these classes of parallel task models can therefore be leveraged to ascertain the schedulability of a task set modeled as discussed in this work. Note also that a GSSG has a very particular structure (fair progression and synchronous behavior), which may allow for improved schedulability tests.
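To illustrate how such generic techniques apply, the classic workload/critical-path argument (Graham's bound) gives a makespan bound for any work-conserving schedule of a single DAG in isolation; a schedulability check then compares this bound to the deadline. The sketch below is a minimal, self-contained illustration of that well-known bound, not an algorithm from this paper; the function name is ours.

```python
def graham_makespan_bound(workload, crit_path, m):
    """Graham's bound: any work-conserving schedule of a DAG with
    total workload W and critical-path length L on m identical
    cores finishes no later than L + (W - L) / m."""
    return crit_path + (workload - crit_path) / m

def meets_deadline(workload, crit_path, m, deadline):
    """Sufficient (not necessary) check for a single DAG in isolation."""
    return graham_makespan_bound(workload, crit_path, m) <= deadline
```

For instance, a GSSG with workload 10 and critical-path length 4 on 3 cores finishes within 4 + 6/3 = 6 time units, so any deadline of at least 6 is met under this bound.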

From a schedulability point of view, current scheduling techniques for parallel tasks fall broadly into two categories: decomposition methods and direct analysis. In decomposition methods, each sub-task of a DAG is assigned an intermediate offset and deadline based on the structure of the DAG; each sub-task can then be treated as an individual sequential task, and the parallel task scheduling problem reduces to the traditional sequential task schedulability problem on a multiprocessor system, for which there is a plethora of scheduling algorithms and schedulability tests in the literature. In direct analysis, schedulability conditions are derived directly from the properties of the DAG. Some analysis techniques consider the precedence constraints of the DAG to study the execution requirements at different time instants, whereas others rely only on the workload and critical-path length to create a synthetic worst-case scenario that upper-bounds the interference. For the latter case, our contribution brings no benefit, since we end up increasing the maximum workload of the task. However, it has been shown in \cite{qamhieh13} that considering the internal structure of a DAG (as we do in this work) may improve the schedulability tests. Hence, for all the other cases that rely on the internal structure of the DAG (including the decomposition methods), our contribution directly enables the application of such schedulability analysis methods to generic real-time parallel applications with conditional execution, without having to assume that all the sub-tasks of every flow execute.
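As a concrete illustration of the decomposition approach, the sketch below splits a task's deadline among the segments of a synchronous parallel task proportionally to each segment's length, yielding an offset and an intermediate deadline per segment. The proportional rule and the function name are illustrative choices of ours, not taken from any specific decomposition method in the literature.

```python
def decompose_deadlines(segment_lengths, deadline):
    """Assign each segment of a synchronous parallel task an offset
    and an intermediate deadline, proportionally to the segment's
    worst-case execution length. Returns (offset, deadline) pairs."""
    total = sum(segment_lengths)
    pairs, t = [], 0.0
    for length in segment_lengths:
        share = deadline * length / total  # segment's share of the deadline
        pairs.append((t, t + share))
        t += share
    return pairs
```

Each segment can then be analyzed as an independent sequential task released at its offset with its intermediate deadline, e.g. `decompose_deadlines([2, 1, 1], 8.0)` splits a deadline of 8 into the windows (0, 4), (4, 6), and (6, 8).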

From an optimization viewpoint, it is worth noting that Algorithm~\ref{algo:createGSSG} is a very simple algorithm that merges a collection of valid SSGs into a single valid GSSG. We chose to present it in its simplest form for ease of understanding and of proving the validity of its output. However, the algorithm can be further improved, particularly with respect to tightening the GSSG's workload. For example, at each iteration of the while loop, if there is an SSG in $L$, say $\SSG_{i,j}$, whose remaining workload is less than or equal to the remaining critical-path length of the SSG with the longest critical path, say $\SSG_{i,k}$, then the workload of $\SSG_{i,j}$ can be entirely executed (even sequentially) within the servers that will be created in the next iterations to accommodate the remaining sub-tasks of $\SSG_{i,k}$. Therefore, $\SSG_{i,j}$ can safely be removed from $L$, which may reduce the workload of the output GSSG if, for example, $\SSG_{i,j}$ would otherwise have contributed to $\MaxNbServersInSegment$ in a later iteration.
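The pruning rule above can be sketched as follows, representing each SSG only by its remaining workload and remaining critical-path length; the field names and function name are hypothetical, and this is a sketch of the optimization idea, not of Algorithm~\ref{algo:createGSSG} itself.

```python
def prune_dominated_ssgs(ssgs):
    """Drop every SSG whose remaining workload fits (even if executed
    sequentially) within the remaining critical-path length of the SSG
    with the longest remaining critical path, which is always kept."""
    if not ssgs:
        return []
    longest = max(ssgs, key=lambda s: s["rem_cpl"])
    return [s for s in ssgs
            if s is longest or s["rem_workload"] > longest["rem_cpl"]]
```

The dropped SSGs need no servers of their own: their work can be folded into the servers created for the longest SSG's remaining sub-tasks, which may shrink the number of servers per segment in later iterations.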

As a final remark, it is also worth mentioning that there exists a trade-off between the critical-path length and the workload of the GSSG output by Algorithm~\ref{algo:createGSSG}: it is sometimes possible to reduce its workload by increasing its critical-path length, and vice versa, while preserving its validity. The SSGs output by the algorithms presented in this paper have optimal critical-path lengths, although we omit the proofs due to space constraints. Our future work will explore and try to exploit this trade-off in order to shape the interference between tasks by fine-tuning these two parameters, thereby reducing the worst-case response time of some tasks and improving the system schedulability. 







