\section*{Run 3}
\subsection*{Modules to decompose}
The module we decompose in this run is the Main DB component. We also create a new level 2 component, the Anomaly Detector, and decompose it to level 3.

\subsection*{Choose architectural drivers}
\begin{itemize}
\item Av1: Measurement database failure
\item Av2: Missing Measurements
\item P2: Anomaly detection
\item P3: Requests to the measurement database
\item UC7: Send trame to remote device
\item UC8: Send Measurement
\item UC10: Detect anomaly
\item UC13: Send Alarm
\end{itemize}

We choose to work with Av1 to implement the Main DB component. Since we already started with UC8 in the previous run, it is natural to include P2 and UC10, because both deal with anomalies. In the first run we noted that both UC7 and UC13 need to work with the database; we implement that in this run as well. Finally, because UC8 deals with measurements, we also include Av2 in this ADD run.

\subsection*{Choose architectural patterns}

Availability tactics (Av1):
\begin{itemize}
\item Fault detection: Heartbeat to notify the ReMeS operator when a database fails.
\item Fault recovery: Active redundancy seems to us the best solution to the fault recovery problem, since the database must have an availability of 99.9\%.
%\item Fault prevention: Transactions make sure that only consistent data is being put in the database and it prevents collisions between multiple threads trying to access the same data.
\end{itemize}
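The heartbeat tactic above can be sketched as follows. This is a minimal illustration, assuming a monitor that tracks the last heartbeat timestamp per database replica; all names are hypothetical and not part of the ReMeS design itself.

```python
import time

class HeartbeatMonitor:
    """Tracks the last heartbeat per database replica and flags overdue ones."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = {}  # replica id -> timestamp of last heartbeat

    def beat(self, replica_id, now=None):
        # A replica calls this periodically to signal it is alive.
        self.last_beat[replica_id] = time.monotonic() if now is None else now

    def failed_replicas(self, now=None):
        # Replicas whose heartbeat is overdue; these would trigger an
        # issue notification to a ReMeS operator.
        now = time.monotonic() if now is None else now
        return [r for r, t in self.last_beat.items()
                if now - t > self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=5.0)
monitor.beat("replica-1", now=0.0)
monitor.beat("replica-2", now=0.0)
monitor.beat("replica-1", now=10.0)       # replica-2 stays silent
print(monitor.failed_replicas(now=12.0))  # -> ['replica-2']
```

The timestamps are passed explicitly here only to keep the example deterministic; in a running system the replicas would call \texttt{beat()} on a timer.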

%Availability tactics (Av2):
%\begin{itemize}
%\item Fault detection: A sort of Ping/Echo can be used to detect if a measurement is missing.
%\end{itemize}

Performance tactics (P2):
\begin{itemize}
\item Resource demand: Increase computational efficiency. If the efficiency of the anomaly detection goes up, the performance goes up.
\item Resource management: One of the requirements is to balance the load over multiple instances of each sub-system, so we need some sort of load balancing tactic. For this, we have chosen to introduce concurrency.
\end{itemize}

Performance tactics (P3):
\begin{itemize}
\item Resource management: We did not choose ``maintain multiple copies of either data or computations'', because most requests to the database do not ask for the same data repeatedly. Storing data multiple times or caching it is therefore of little use, as the data will rarely be requested again. It is more important that multiple requests for different data can be handled simultaneously, hence we chose to introduce concurrency. This tactic was also already chosen for P2.
\item Resource arbitration: In normal mode, the database processes incoming requests first-in, first-out (FIFO scheduling). When the system is in overload mode, requests must be handled in the order that lets the system return to normal mode the fastest, while taking into account that different users have different priorities. Priority is the keyword here: that is why dynamic priority scheduling with earliest deadline first is chosen over fixed priority scheduling in overload mode. This way premium users get an advantage over normal users, but because the scheduling is dynamic, starvation is not a problem.
%TODO: maybe add the specific deadlines here, premium vs.\ normal
\end{itemize}
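The overload-mode arbitration described above can be sketched with a priority queue ordered by absolute deadline (earliest deadline first). The relative deadlines per user class are assumed values for illustration only; premium requests get a tighter deadline, so they tend to be served first, while an old request from a normal user eventually has the earliest deadline and cannot starve.

```python
import heapq
import itertools

# Assumed relative deadlines per user class (illustrative, not from the SLA).
DEADLINE_S = {"premium": 1.0, "normal": 5.0}

class EdfScheduler:
    """Earliest-deadline-first scheduling of database requests."""

    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # FIFO tie-break for equal deadlines

    def submit(self, request, user_class, arrival_s):
        # Absolute deadline = arrival time + class-specific relative deadline.
        deadline = arrival_s + DEADLINE_S[user_class]
        heapq.heappush(self._heap, (deadline, next(self._tie), request))

    def next_request(self):
        # Pop the request with the earliest absolute deadline.
        return heapq.heappop(self._heap)[2]

sched = EdfScheduler()
sched.submit("normal-req", "normal", arrival_s=0.0)    # deadline 5.0
sched.submit("premium-req", "premium", arrival_s=1.0)  # deadline 2.0
sched.submit("old-normal", "normal", arrival_s=-5.0)   # deadline 0.0
print(sched.next_request())  # -> old-normal
```

Note how the old normal request is served before the fresh premium one: its absolute deadline is earlier, which is exactly the anti-starvation property argued for above.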

Patterns:

\begin{itemize}
\item Replicated component group (Av1): Some components in a system, like the measurement database, must meet high availability and fault tolerance requirements. The key benefit of a replicated component group is that it enhances the fault tolerance of a component: as long as at least one of the component instances within the group is accessible, client requests can be serviced. The replicated component group pattern is mainly used to provide high availability, not concurrency.
\item Business delegate (Av1): The business delegate pattern is used as the interface of the replicated component group pattern. If multiple instances of the component are deployed, the business delegate can perform load balancing before issuing a request to a specific component instance.
\item Observer (Av1): The observer pattern is used to make sure all the replicas in the replicated component group have the same data available.
\item Active Object (P2, P3): To provide concurrency, the active object pattern can be used. Operations of components must often be able to run concurrently within their own threads of control, clients should be able to issue requests on components without blocking until the requests execute, and it should be possible to schedule the execution of client requests according to specific criteria, such as request priorities or deadlines.
\item Master/Slave (P2): To make the anomaly detection efficient, we use a master/slave pattern. The master gathers the needed data and divides the task over multiple slaves. To make resource creation as efficient as possible, the slaves can be implemented as a resource pool.
\end{itemize}
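The master/slave division of the anomaly detection work can be sketched as follows. The threshold-based detection and all names are illustrative assumptions, not the actual ReMeS detection logic, and a thread pool stands in for the slave resource pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed domain-specific limit, purely for illustration.
ANOMALY_THRESHOLD = 100.0

def slave_detect(chunk):
    """One slave: report the measurements in its chunk that look anomalous."""
    return [m for m in chunk if m > ANOMALY_THRESHOLD]

def master_detect(measurements, n_slaves=3):
    """The master: split the work, fan out to the slave pool, merge results."""
    chunks = [measurements[i::n_slaves] for i in range(n_slaves)]
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:  # the slave pool
        results = pool.map(slave_detect, chunks)
    return sorted(a for part in results for a in part)

print(master_detect([10.0, 250.0, 42.0, 130.0, 7.0]))  # -> [130.0, 250.0]
```

The pool is created per call here for brevity; a long-lived resource pool, as suggested above, would keep the executor alive across detection rounds.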
%For the next run: message endpoint


\subsection*{Modules}
\begin{itemize}
\item Main DB
\begin{itemize}
\item Main DB Scheduler: This scheduler makes sure that, when needed, more important requests to the database get a higher priority than other requests.
\item Main DB Interface: Implemented as a business delegate. This interface gives the customer and the system a single point of access to the different database replicas, so that load balancing can be applied across them. The interface can also check whether a database replica is still operational; when it is not, the component sends an issue notification to a ReMeS operator.
\item Main DB Replica 1, 2 and 3: The database and its replicas.
\end{itemize}
\item Anomaly Detector: This component will take care of the anomaly detection.
\begin{itemize}
\item Anomaly Detector Scheduler: This will schedule the anomaly detection according to the requirements in P2.
\item Anomaly Master: The master receives the scheduled data from the scheduler. Based on the data that enters through this channel, it can read any additional data it needs from the Main DB. If an anomaly is detected, the Anomaly Master sends an anomaly notification to the customer.
\item Anomaly Slave Pool: These are the slaves that are used by the Anomaly Master. The master divides the work over these slaves.
\end{itemize}
\item Missing Measurement Checker: This component systematically checks the database for missing measurements. It requests the latest measurement of each customer from the database and checks it against that customer's update interval. If the component notices that something is not right, it notifies a ReMeS operator.
\end{itemize}
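The core check of the Missing Measurement Checker can be sketched as follows. This is a minimal illustration; the function name, data shapes, and timestamps are assumptions, not the actual component interface.

```python
def find_missing(latest_by_customer, interval_s_by_customer, now_s):
    """Report customers whose latest measurement is older than their
    expected update interval; these would trigger an issue notification
    to a ReMeS operator."""
    missing = []
    for customer, last_s in latest_by_customer.items():
        if now_s - last_s > interval_s_by_customer[customer]:
            missing.append(customer)
    return sorted(missing)

# Latest measurement timestamps and update intervals, per customer.
latest = {"alice": 100.0, "bob": 40.0}
intervals = {"alice": 60.0, "bob": 60.0}
print(find_missing(latest, intervals, now_s=150.0))  # -> ['bob']
```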

\begin{figure}
  \centering
    \includegraphics[width=\textwidth, angle=90]{addRuns/ADDRun3.pdf}
  \caption{Run 3: Level 2 and level 3 decomposition of the ReMeS system}
  \label{addRun3}
\end{figure}

\subsection*{Define interfaces}

\begin{center}
    \begin{longtable}{ | p{4cm} | p{4cm} | p{7cm} |}
    \hline
    Component & Interface & Operations \\ \hline
    Main DB Scheduler & iMainDBScheduler & \begin{itemize} \item \textbf{readData(Query q)} \item \textbf{storeData(Query q)} \end{itemize}  \\ \hline
    Main DB Interface & iMainDBInterface & \begin{itemize} \item \textbf{addQuery(Query q)} \end{itemize}  \\ \hline
    Main DB Replica & iMainDBReplica & \begin{itemize} \item \textbf{addQuery(Query q)} \end{itemize}     \\ \hline
    Anomaly Detector Scheduler & iAnomalyScheduler & \begin{itemize} \item \textbf{checkAnomaly(Trame t)} \end{itemize} \\ \hline
    Anomaly Master & iAnomalyMaster & \begin{itemize} \item \textbf{detectAnomalies(Trame t)} \end{itemize}  \\ \hline
    Anomaly Slave Pool & iAnomalySlave & \begin{itemize} \item \textbf{Process(Trame t)}\end{itemize} \\ \hline
    Missing Measurement Checker & iMissingMeasurement & \\ \hline
    Notification Handler & iNotificationHandler & \begin{itemize} \item sendAlarmNotification(Module m, Details d) \item sendIssueNotification(Details d) \item \textbf{sendAnomalyNotification(Module m, Details d)} \end{itemize} \\
    \hline
    \end{longtable}
\end{center}
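The \texttt{addQuery} operation of the Main DB Interface, acting as a business delegate, can be sketched as follows. All names are hypothetical: the delegate forwards a query to one of the replicas, chosen round-robin for load balancing, and on failure tries the next replica and reports the problem, which is where the issue notification to a ReMeS operator would be triggered.

```python
import itertools

class MainDBInterface:
    """Business delegate in front of the replicated component group."""

    def __init__(self, replicas, notify_operator):
        self._replicas = replicas
        self._next = itertools.cycle(range(len(replicas)))  # round robin
        self._notify = notify_operator

    def add_query(self, query):
        # Try each replica at most once, starting at the round-robin cursor.
        for _ in range(len(self._replicas)):
            replica = self._replicas[next(self._next)]
            try:
                return replica(query)
            except ConnectionError:
                # Failed replica: report it and fall through to the next one.
                self._notify(f"replica failed while handling: {query}")
        raise RuntimeError("all replicas are down")

issues = []
def healthy_replica(query):
    return f"result({query})"
def failed_replica(query):
    raise ConnectionError("replica unreachable")

db = MainDBInterface([failed_replica, healthy_replica], issues.append)
print(db.add_query("SELECT 1"))  # -> result(SELECT 1), one issue reported
```

Round-robin is only one possible balancing policy; the delegate could equally pick the least-loaded replica without changing its interface.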

\begin{figure}[]
  \centering
    \includegraphics[width=\textwidth]{addRuns/MainDB.png}
  \caption{Run 3: Interfaces of level 3 decomposition of the Main DB}
  \label{mainDB}
\end{figure}

\begin{figure}[]
  \centering
    \includegraphics[width=\textwidth]{addRuns/AnomalyDetector.png}
  \caption{Run 3: Interfaces of level 3 decomposition of the Anomaly Detector}
  \label{anomalyDetector}
\end{figure}

\begin{figure}[]
  \centering
    \includegraphics[width=\textwidth]{addRuns/Run3.png}
  \caption{Run 3: The addition of the Anomaly Detector and Missing Measurement Checker components to the second level decomposition}
  \label{run3}
\end{figure}

\subsection*{Verify and refine}
Because we can now put data in the Main DB, UC7 and UC13 are completed. Av1 is also completed: the replicated component group makes sure the database is always up and running, and ReMeS operators can be notified if a database replica fails. Anomaly detection is now possible; when anomalies are detected the customer is notified, and when the system is in overload mode the scheduler kicks in and schedules according to SLA priorities. This covers all of P2 and UC10. Incoming trames are sent to the Anomaly Detector through the Data Processor, so we can consider UC8 covered. P3, requests to the measurement database, is also completely covered: thanks to the scheduler, requests are handled according to their priority. Finally, the Missing Measurement Checker detects missing measurements, which covers Av2.

\paragraph*{Requirements that are now completely covered:}
\begin{itemize}
\item \textbf{Av1: Measurement database failure}
\item \textbf{Av2: Missing measurements}
\item P1: Timely closure of valves
\item \textbf{P2: Anomaly Detection}
\item \textbf{P3: Requests to the measurement database}
\item \textbf{UC7: Send trame to remote device}
\item \textbf{UC8: Send measurement}
\item UC9: Notify customer
\item \textbf{UC10: Detect anomaly}
\item \textbf{UC13: Send alarm}
\end{itemize}

\subsection*{Remarks}
\begin{itemize}
\item As can be seen, the Anomaly Detector Scheduler does not have a buffer. This was done to avoid cluttering the design; in this implementation the buffer is embedded in the scheduler.
\end{itemize}
