\documentclass{l3proj}
\usepackage{comment}
\usepackage{textcomp}
\usepackage{placeins}

\usepackage[all]{hypcap}   %for going to the top of an image when a figure reference is clicked


\begin{document}

\title{CS31 Mask Detector and Temperature Measurement Tool}

\author{Andrew Aitken \\
        Liam Halpin \\
        Robert Nimmo \\
        Andreas Vrachimis \\
        Yuhao Wu}

\date{\today}

\maketitle

\begin{abstract}

This paper is a case study of the agile software development process behind the Face Mask Detector and Temperature Measurement Application created for CENSIS, the Scottish Innovation Centre for sensing, imaging and Internet of Things technologies. The product was completed as part of the third-year Team Project module in the department of Computing Science.

Throughout this paper we discuss the project itself, reflect upon the software practices we used, and evaluate our final delivered product.

The conclusion discusses the challenges we faced in completing the project remotely and in working with a customer on a long-term project.

\end{abstract}

%% Comment out this line if you do not wish to give consent for your
%% work to be distributed in electronic format.
\educationalconsent

\newpage

%==============================================================================
\section{Introduction}

This paper presents a case study of the development of the Face Mask Detector and Temperature Measurement Application. The product detects facial features and determines whether the user is wearing a face mask correctly, incorrectly, or not at all, and whether their temperature is below 38 \textdegree{}C.

Firstly, this document will introduce the client for whom the software was developed and discuss their requirements. Following this, the software will be presented in its final form, with detail on its functionality and feature set. Furthermore, the project's development process will be described, including the issues which arose and the methodology used. Finally, the paper will conclude by discussing the advantages and disadvantages of the development techniques employed, and what would be done differently if the development process were to be repeated.

The rest of the case study is structured as follows. Section \ref{sec:case_study_background} presents the background of the case study, describing the customer and project context, the aims and objectives, and the project state at the time of writing. Sections \ref{sec:requirements_and_features} through \ref{sec:programming_focus} reflect on our development practices and the issues that arose during the project. Section \ref{sec:conclusion} concludes the case study and discusses the key lessons and achievements of the project.

%==============================================================================
\section{Case Study Background}
\label{sec:case_study_background}

%=========
\subsection{Our Client}
\label{sec:our_client}

Our client for the duration of the project was CENSIS, a not-for-profit centre based in Glasgow that supports private and public sector organisations in accelerating their pace of innovation in the fields of sensing, imaging, and Internet of Things (IoT) technologies \cite{censis_1}. The organisation's host and administrative hub is the University of Glasgow.

The team at CENSIS comprises individuals with expertise in engineering, project management, business engagement, finance, marketing, and administration. Our principal contact was Senior Engineer Helen Sandison, with Project Manager John Whelan as a secondary contact.

The CENSIS vision lab, launched in November 2018 with Sandison as its technical lead, seeks to help businesses of all sectors adopt or develop computer vision or imaging solutions. The techniques involved in computer vision, such as event detection, object tracking, motion estimation, and image classification, have a variety of uses across business environments and sectors. Examples include transport (pedestrian detection, driver monitoring, and parking assistance); manufacturing (package inspection, reading barcodes, and parcel tracking); and, most relevant to our project, the human face (personal device access, biometrics, and photo-tagging) \cite{censis_2}.

%=========
\subsection{Project Proposal}
\label{sec:project_proposal}

The core idea was to develop software to detect an individual's face mask placement and temperature. This would theoretically be used for contactless entry into buildings and offices. This would be accomplished using computer vision and machine learning to detect an individual in the frame, determine whether they were correctly wearing a mask, and analyse their temperature. The final version of the software would run on the NVIDIA Jetson Nano \cite{jetson_nano}, a small, low-cost computer capable of running object detection, image classification, and many other vision-based projects. The system would also utilise a Logitech webcam for face mask detection and the FLIR Lepton 3.5 thermal camera module \cite{lepton} for thermal detection. The client also specified that they would like the software to alert individuals to incorrect face mask placement and to a detected fever (high temperature).

During the first few iterations, we conducted research into existing projects which used similar technologies, set up our environments, and installed the required libraries we would use to create our software. Since we were required to train a machine learning model, we also had to find suitable images to use as part of a dataset. The client indicated that this dataset could be collated from various sources, such as pre-existing open-source datasets, custom-made artificial datasets with face masks overlaid, or images downloaded from Google using a web scraper. We discussed this at length and chose a hybrid method to ensure our dataset was robust.

While ultimately it was our decision which libraries to use, we concluded that OpenCV \cite{opencv} (a computer vision library) and MobileNetV2 (a visual recognition neural network) \cite{mobilenetv2}, with the aid of TensorFlow \cite{tensorflow} and Keras \cite{keras}, would be the best choice for developing our software. We also completed several tutorials, provided to us by the client, to become more familiar with these libraries. The client also provided information on how to set up the hardware and install the required dependencies.
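To illustrate, a MobileNetV2-based mask classifier can be assembled with Keras along the following lines; the three-class head, input size, and hyperparameters shown here are illustrative assumptions rather than our exact configuration:

```python
import tensorflow as tf

NUM_CLASSES = 3  # e.g. with_mask, without_mask, mask_incorrect (illustrative)

def build_mask_classifier(input_shape=(224, 224, 3)):
    """Assemble a MobileNetV2 backbone with a small classification head."""
    # weights=None avoids downloading ImageNet weights here; in practice the
    # backbone would be initialised with pre-trained weights and fine-tuned.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False  # freeze the backbone during initial training

    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_mask_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The frozen backbone with a small dense head is a common transfer-learning pattern; the actual training configuration would depend on the dataset.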

Furthermore, while gathering the initial requirements, the client stated that our produced software would only be used as a prototype and was not intended for commercial use. We therefore agreed that the software should be fully open-source under the MIT licence. This would allow the customer to freely use, modify, and redistribute the software without restrictions.

%=========
\subsection{The Program}
\label{sec:the_program}

The system is completely portable and is ideally placed at an entrance of a building. The RGB and thermal cameras are mounted in the same horizontal plane and at the same angle, so that both capture near-identical content. For each frame captured by the RGB camera, a face detection algorithm is applied to obtain the coordinates of all detected faces. The trained model is then run against the faces found to predict whether each person is wearing a mask correctly, incorrectly (under the nose or on the chin), or not at all. Additionally, forehead coordinates are computed from the facial coordinates of each individual. With the help of the FLIR thermal camera, the temperature of each individual is measured and displayed on the LCD, together with the corresponding face mask prediction and its confidence score, as shown in Figure \ref{fig:demo}.

\input{figures/demonstration_photos}
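The forehead computation can be sketched as a simple function of the face bounding box; the fractions used below are illustrative assumptions, not the project's exact values:

```python
def forehead_region(x, y, w, h):
    """Estimate a forehead region from a face bounding box (x, y, w, h).

    The fractions here are illustrative: the forehead is taken as the
    central band of the top quarter of the detected face.
    """
    fx = x + w // 4   # inset a quarter of the face width from each side
    fw = w // 2       # central half of the face width
    fy = y + h // 16  # just below the top edge of the box
    fh = h // 4       # top quarter of the face height
    return fx, fy, fw, fh
```

The resulting region is then looked up in the thermal frame to read a temperature for that individual.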

In cases where the measured temperature exceeds the threshold of 38 \textdegree{}C, individuals are alerted with an additional message. Figure \ref{fig:hardware} shows an overview of the hardware used for the system.

\input{figures/hardware}
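Assuming the Lepton's radiometric (TLinear) output in centikelvin, the threshold check reduces to a small conversion function; the exact pipeline in the project may differ:

```python
FEVER_THRESHOLD_C = 38.0

def raw_to_celsius(raw):
    """Convert a Lepton radiometric (TLinear) reading, given in
    centikelvin, to degrees Celsius."""
    return raw / 100.0 - 273.15

def fever_alert(raw):
    """Return True when the measured temperature exceeds the threshold."""
    return raw_to_celsius(raw) > FEVER_THRESHOLD_C
```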

%==============================================================================

\section{Reflections}
\label{sec:reflection}

%=========
\subsection{Requirements and Features}
\label{sec:requirements_and_features}

Our customer’s clear view of the project’s purpose was an important factor in gathering the major requirements. In addition to the core requirements, two optional requirements were introduced, both to broaden the scope of the project and to test our abilities with completely new technologies.

Our Minimum Viable Product (MVP) consisted of two main features: a face mask detector which determines whether an individual is wearing a face mask or not, and a temperature measurement system which displays an alert when an individual entering the building has a temperature above 38 \textdegree{}C. In addition to the MVP, the first optional requirement extended the face mask detector to also determine whether a user is wearing the face mask incorrectly. The second optional requirement was a facial recognition system enabling contactless entry through a barrier.

Once the project’s requirements had been finalised between the customer and us, MoSCoW analysis \cite{MoSCoW} was our next step. We found this process vital for identifying the significance of each feature, which helped us to consistently meet deadlines. This technique allowed us to prioritise both major and minor requirements into four categories, considering their importance for the overall success of the project, the time needed for their implementation, and the level of effort required to complete them to a satisfactory quality. The acronym MoSCoW stands for the four categories our requirements were grouped into: Must have, Should have, Could have, and Won’t have right now.

To estimate the time required for each issue effectively and accurately, we used Planning Poker \cite{planning_poker}. This agile planning technique was employed before each iteration, focusing on precise time estimation for both short-term issues and long-term requirements. During the process, each of us justified their estimate with arguments. In some cases estimates diverged widely, and after some discussion the process was repeated until a general consensus was reached. Overall, we found that Planning Poker gave us the right perspective on our project’s current stage and on how much effort and time were still needed to complete our tasks. Moreover, our estimation accuracy improved drastically, which helped the overall development process by allowing us to distribute the work uniformly among team members.

One of the first challenges we faced was that computer vision and machine learning were new to most of us. Due to this, we had to make our estimates large enough to account for any unforeseen complications which could appear during implementation. Moreover, since the system would be deployed on the Jetson Nano and only one of us had access to the hardware for the whole project, estimates involving the hardware were more generous than others.

After we completed our initial research, we all agreed that the MVP should be implemented first. Once this was complete to a satisfactory quality, we could proceed with the remaining optional requirements, time permitting. Having completed the face mask detector significantly ahead of schedule, we felt confident enough as a team to implement the first optional requirement. With the addition of the incorrect-mask label to our dataset, our model's complexity rose substantially. We therefore determined that this feature would be integrated into the remaining project once the model had been optimised to an adequate level, requiring additional research into various feasible optimisation strategies.

Project planning and feature estimation went smoothly, underlining for us the importance of the precise time estimation from Planning Poker. Until the fifth iteration, we were optimistic that we would complete both optional requirements. However, due to the limitations of the hardware, we lost almost an entire iteration researching other approaches to complete the thermal detection component. This significantly reduced the remaining development time, and we were therefore unable to complete the last optional requirement.

The motivation of the project was to develop an application that could help with workplace or public safety in response to the COVID-19 pandemic. With that in mind, the project's features and requirements were largely predefined by the need to mitigate the pandemic. Hence, user stories were not created, mainly due to the detailed requirements provided by our client. Moreover, we found Planning Poker and MoSCoW analysis equally important and essential, for the accurate estimation of the project's tasks and for gauging the importance of each requirement, respectively. Establishing a concise project plan in the early stages of development helped us to organise our time effectively and finish the project within the time constraints, while keeping the quality of our product at a high standard. Even though the development process strictly followed the project plan during the first few iterations, an optional feature was discarded at the end of the project. As the Agile Manifesto \cite{agile_manifesto} states, responding to change is valued over following a plan, and this is a common but essential part of the agile project management process.
%=========
\subsection{Client Communication}
\label{sec:client_communication}

Our team knew that effective communication with our client, and meaningful updates throughout the process, would help instil confidence in the project and its progress. Thus, we split our development process into a series of six iterations. These cycles would last around four weeks each and would consist of a set of predetermined goals which we aimed to meet by the end of each iteration.

Meetings with the customer were held at the end of each iteration, where we informed them of our progress during the cycle, including what we achieved and what we were unable to do (if anything), and then discussed with them the priorities for the next iteration.

\begin{table}[!htbp]
\centering
\begin{tabular}{l|l}
\multicolumn{1}{c|}{\textbf{Date}} & \multicolumn{1}{c}{\textbf{Purpose of Meeting}}  \\
\hline
 14/10/2020 & Introduction to Customer and Project Planning  \\
 \hline
 26/10/2020 & End of First Iteration\\
 \hline
 11/11/2020 & End of Second Iteration\\
 \hline
 02/12/2020 & End of Third Iteration\\
 \hline
 20/01/2021 & End of Fourth Iteration\\ 
 \hline
 18/02/2021 & Discussion of Arising Issues \\ 
 \hline
 24/02/2021 & End of Fifth Iteration \\
 \hline
 17/03/2021 &  End of Sixth Iteration and Handover Planning\\ 
 \hline
 31/03/2021 & Handover and Final Presentation\\
 \hline
\end{tabular}
\caption{Meetings with Client}
\end{table}
\FloatBarrier

The meetings we held with the client were set to an agenda beforehand and minutes were taken during each meeting in order to avoid confusion about goals or any other miscommunication. The agenda, minutes, and goals for the end of each iteration were posted to our GitLab wiki for both the team and the customer to access if necessary.

Our client was cooperative and responded quickly to any query we had throughout the development process. During our fifth iteration, we encountered development difficulties due to perceived issues with the FLIR thermal camera module (as detailed in Section \ref{sec:technical_issues}). Our client was quick to respond, providing us with replacement hardware to help solve the issue. Their responsiveness and eagerness to help ensured that there were few, if any, communication issues during development.

Interactions with the customer, as was the case with interactions between the team members, were performed remotely throughout the process using Microsoft Teams and email due to the COVID-19 pandemic.
%=========
\subsection{Technical Issues}
\label{sec:technical_issues}

Throughout the project, we faced multiple setbacks which affected our iterations. Early on, we ran into issues setting up a CI pipeline on GitLab. This remained a running issue through the first semester until we successfully got a pipeline set up.

For continuous integration, Fowler suggests that teams “Test in a Clone of the Production Environment” \cite{fowler_test_clone}. Our production environment includes the Jetson Nano itself and additional external hardware, a Logitech RGB webcam and a FLIR Lepton 3.5 thermal camera, which interact with the world around them. This made it difficult to find a solution that could adequately test our code, in an ever-changing codebase, within the constraints of the pipeline.

As a result of these limitations, our tests were narrower in scope: we could not exercise areas of the codebase that required real-time camera input. Our tests therefore focused on the dataset scripts, which could be tested far more easily without interacting with the surrounding world, and on checking that we were following the PEP 8 style guide \cite{pep8}, to improve the readability and consistency of our Python code, using the linting tool PyLint \cite{pylint}.
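A GitLab CI configuration of the kind we converged on can be sketched as follows; the image tag, job names, and directory layout are illustrative assumptions rather than our exact pipeline:

```yaml
# Illustrative .gitlab-ci.yml sketch: paths, job names, and the Python
# image tag are assumptions, not the project's exact configuration.
image: python:3.8

stages:
  - lint
  - test

lint:
  stage: lint
  script:
    - pip install pylint
    # Enforce PEP 8 style and catch common errors
    - pylint src/

test:
  stage: test
  script:
    - pip install pytest
    # Dataset-script tests that require no camera input
    - pytest tests/
```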

This approach resulted in our tests lacking the rigour usually required for agile software development. Our tests did catch bugs in our code which would not have been spotted quickly otherwise. We could have improved our tests by splitting our code into more easily testable modules. This would have allowed our tests to reach a larger proportion of the codebase, but we would still have faced problems testing code that depended on hardware.
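As an example of the kind of hardware-free unit we have in mind, a hypothetical dataset helper such as the following can be tested without any camera input; the filename convention here is an assumption for illustration:

```python
# Hypothetical dataset helper: the filename convention (label followed by a
# numeric suffix, e.g. "with_mask_0042.png") is an illustrative assumption.
LABELS = {"with_mask", "without_mask", "mask_incorrect"}

def label_from_filename(filename):
    """Extract the class label from a dataset image filename."""
    stem = filename.rsplit(".", 1)[0]  # drop the extension
    label = stem.rsplit("_", 1)[0]     # drop the numeric suffix
    if label not in LABELS:
        raise ValueError(f"unknown label in {filename!r}")
    return label
```

Pure functions like this can run inside the pipeline on every push, whereas code that opens a camera device cannot.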

After deploying the mask detector script on the Jetson Nano for the first time, we quickly realised that the script and mask model would have to be optimised to achieve acceptable performance on the device.

One change we spent a period of time trying to implement was converting the face mask detector model into a TensorRT \cite{tensorRT} accelerated model. This proved far more challenging than anyone expected and slowed down our development. We expected to be able to train the model as a Keras model, as we were already doing, convert it into a TensorRT accelerated engine, and then deploy the engine using Python to run the live predictions. Unfortunately, this was not the case, largely due to a lack of guidelines and documentation, which significantly affected our development speed.

Given this lack of guidelines, and after discussing the issue with the customer, we concluded that we should move away from further optimising the model and return to the problem, time permitting, at the end of the project after completing all of the core requirements first. In hindsight, we should have raised the issue with the customer earlier and kept them better informed about the problems we were facing, instead of investing a large amount of effort into researching it.

During the project we had only one Jetson Nano, which made it difficult for team members to test software changes quickly without the help of the member who held the device. The software could not be run properly in headless mode over SSH, and using a remote desktop connection resulted in a poor framerate, which was one of the metrics we were often testing on the Jetson Nano. This caused particular problems when optimising the face mask detector model into a TensorRT accelerated model.

Working with hardware was a particular challenge throughout the project. The Lepton 3.5 thermal camera \cite{lepton} caused major issues when we tried to connect it to the Jetson Nano. We expected the thermal camera's integration with the mask detector to be far simpler than it turned out to be.

Due to the lack of documentation on connecting, debugging, and running code for the Lepton 3.5 thermal camera using Python, a large amount of time was spent fixing issues with the connection. As no one in the team had much experience, and only one member was able to actively debug the issue, this caused a large setback in our development process that we had not foreseen.

Initially we used an SPI FLIR Lepton camera breakout board \cite{spi_breakout}; however, due to the lack of supported libraries and source code for using the FLIR Lepton 3.5 thermal camera over an SPI connection with a Jetson Nano in Python, we could not get this method to work. At first we thought this could be a defect in the Lepton 3.5 thermal camera or the breakout board. After discussing the issue with the customer, they agreed to replace both pieces of hardware. Unfortunately, the replacements did not fix the problem, and we decided to look at alternative approaches.

We were able to find an alternative approach which utilised the PureThermal Mini USB breakout board \cite{purethermal_mini}. This approach was far easier to work with, as the board ships pre-configured to operate as a plug-and-play UVC 1.0 USB thermal webcam that works with standard webcam and video applications on all major platforms, including the Jetson Nano. Using the FlirPy library \cite{FlirPy}, we managed to complete the feature quickly and integrate it with the rest of the codebase.
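A sketch of this approach is shown below, assuming FlirPy's \texttt{Lepton} camera interface; the display-scaling helper is our own illustration, and the hardware-dependent part is guarded so that it only runs on the device:

```python
import numpy as np

def to_display_frame(raw):
    """Scale a 16-bit radiometric frame to an 8-bit image for the LCD."""
    raw = raw.astype(np.float32)
    span = raw.max() - raw.min()
    if span == 0:
        return np.zeros(raw.shape, dtype=np.uint8)
    scaled = (raw - raw.min()) / span * 255.0
    return scaled.astype(np.uint8)

if __name__ == "__main__":
    # Hardware-dependent: requires the PureThermal Mini board and the
    # flirpy package installed; the grab()/close() usage is an assumption
    # based on flirpy's documented camera interface.
    from flirpy.camera.lepton import Lepton
    camera = Lepton()
    frame = to_display_frame(camera.grab())
    camera.close()
```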

This issue was one of the trickiest to fix and manage. Since the camera was needed to complete a required feature of the application, we had to manage the customer's expectations while finding a solution to the problem. In retrospect, we should have tested hardware compatibility earlier in the project; but due to the complexities of a completely online project, where we had only one Jetson Nano and hardware had to be delivered, this was not as easy as we had hoped. In future projects involving hardware, we would focus on obtaining the hardware earlier to mitigate similar problems.

%=========
\subsection{Issue Management}
\label{sec:issue_management}

Version control was fundamental to our workflow and productivity throughout the project. While working on our \emph{Develop} branch, we assigned ourselves to issues to make sure that we knew what one another was working on. This was usually communicated during our weekly scrum meetings or through Microsoft Teams. We agreed that frequent communication would avoid overlap when working on a new feature, thus avoiding conflicts. Once a feature or change had been implemented, the code was pushed to the \emph{Develop} branch. We also implemented a pipeline which performed static analysis and testing on each branch. This was vital, as our software contains many sub-components which need to work seamlessly with one another.

When an iteration concluded, we created a merge request for the branch which would be reviewed and approved by another member of the team. Once approved, the branch would merge into the \emph{Master} branch.

We performed a retrospective at the conclusion of each iteration, prior to the meeting with the client. This was done on the \emph{Metro Retro} website \cite{metro_retro}. Our set-up consisted of three sections: \emph{Start}, \emph{Stop}, and \emph{Continue}. We began by taking a few minutes to add items to each section, discussing each point as it was added. We then voted on what we deemed the most important issue in each column to focus on during the upcoming iteration.

Once the votes were counted, we placed the top three issues into an \emph{Actions} section. These actions were then converted into issues to allow us to track both individual and collective improvements during the next iteration.

This process was extremely useful in finding areas in which we could improve. In the early stages of the process, it allowed us to quickly identify the strengths and weaknesses of each team member as well as what our main priorities were for the upcoming iteration. It also allowed us to see how our practices improved throughout the process.

However, at times we experienced some difficulty thinking of items to place on the board, particularly in the \emph{Stop} section. Furthermore, in the initial stages of the project, we would occasionally underestimate the time taken to resolve a given issue. 

In response to this, our team coach recommended Planning Poker. We used the \emph{Planning Poker Online} website \cite{planning_poker_online} to work through each issue in turn, assigning scores based on perceived level of difficulty. This allowed us to split larger, more complex issues into smaller, simpler ones effectively. We tried to include the whole team in these sessions to ensure that each member had time to voice their opinions on upcoming features and issues \cite{mike_cohn_2}, as well as propose any new issues.

While we agreed that we would divide issues as far as possible, participating in planning poker sessions allowed us to put this into practice and get a better idea of what our main priorities would be. This helped maintain focus and productivity throughout the development process.

Mike Cohn suggests using planning poker to allow feature prioritisation and to make longer-term predictions \cite{mike_cohn_1}. As we incorporated planning poker early in development, we were able to use our experience in later stages where problems arose (as described in Section \ref{sec:technical_issues}), allowing us to better gauge the time taken to implement a given feature or fix an issue. This was particularly important when we were implementing more challenging features such as Thermal Detection.
%=========
\subsection{Programming Focus}
\label{sec:programming_focus}

As a team, we conducted a large amount of research in the initial iterations, due to our lack of knowledge of computer vision and machine learning technologies. Our client provided us with several tutorials which taught us how to use OpenCV \cite{opencv}, TensorFlow \cite{tensorflow}, and Keras \cite{keras}, to ensure we were sufficiently confident before development began. When we started implementing the face mask detector, only three of our five members worked on it, with the other two conducting further research into optimising the dataset and setting up the pipeline. This changed later in the process, with the entire team working on the code to ensure we all had the same level of familiarity as each feature was developed.

As mentioned, the project largely focuses on deep learning and computer vision. At the beginning of development, we briefly distributed different parts of the workflow among team members. The workflow started with dataset collection and pre-processing, followed by training the face mask model and then the face detector, and finally setting up the hardware and deploying. With this workflow, most of the sections could be done in parallel. However, some issues relied on others being completed first, which at times halted progress.

To resolve this, we utilised pair programming, where team members worked together to complete the problem. Pair programming was organised as needed, when someone required help or to improve team members' overall understanding of the codebase. We also held weekly scrum meetings to discuss our current progress and to run retrospectives when required. This was useful since everyone could see what was accomplished the previous week, how well we were doing, and what our upcoming goals were. We also discussed the issues we encountered with the team to resolve them quicker.

Code reviews were completed bi-weekly. We used them to refactor complex areas of the codebase and to fix linting errors flagged either by the pipeline (using PyLint \cite{pylint}) or during individual coding sessions, keeping the code consistent with PEP 8 \cite{pep8} standards. The code reviews allowed us to go through all the changes we had made and let team members familiarise themselves with areas of the codebase with which they had less experience. We could also resolve any merge conflicts or failed tests flagged by the pipeline. Once everyone was satisfied with the code review, we would create a report of what we covered before publishing it on the GitLab wiki page. As the process continued, we found fewer errors in the code we were writing, leading to less time spent on code reviews and more time spent on implementation.

%=========
\section{Conclusions}
\label{sec:conclusion}

Due to the COVID-19 pandemic, this project was developed and delivered virtually. It was also the first time our team had worked on a large, long-term project, and it was a valuable learning experience for us all. Given our lack of knowledge of computer vision and machine learning, it was quite challenging at times, especially during the initial stages. This was also the first project in which we were developing software for a specific piece of hardware, so we had to ensure the code was optimised as far as possible.

As a team, we placed a large emphasis on employing agile software development practices, dividing our workload into iterations, and conducting regular meetings. We also made sure to evaluate our performance regularly through retrospectives and code reviews, to ensure that our final product was of a high standard.

Together, we maintained a frequent level of communication to ensure that we all knew exactly what our tasks were, preventing any overlap. This was crucial when problems arose, particularly in the later stages nearer delivery. As the case study discusses, we encountered several problems, mainly due to hardware constraints. Because the project was developed and delivered in a virtual setting and only one member of the team had access to the hardware, our ability to diagnose issues was constrained. This was the case during the fifth iteration, with issues connecting the hardware together; a significant amount of time was spent trying to diagnose the problem because only one team member was able to physically interact with the components. Had remote working not been the only option, it may well have been resolved sooner, and less of the iteration would have been consumed by the issue. Despite this, our persistent communication significantly aided us and allowed us to complete the product on time.

Furthermore, this was our first experience of working alongside a real-world client as our work up to this stage had been purely in an academic setting. We gained vital experience in how to effectively communicate and reason with our client, as well as improving our written and oral presentation skills. This is experience we will likely use in future projects and in industry.

In addition, prior to this project, most of us also lacked significant experience with version control systems. While we all knew most of the fundamentals, we did not have any experience with any advanced practices such as using multiple branches, submitting merge requests, or setting up a pipeline. Learning how to effectively utilise these practices has helped us considerably in both the quality of the final codebase and the speed at which we developed the software. Frequent application of these methods has also given us more confidence when working with version control both individually and as a team.

Reflecting on the project, there are aspects of our work we might change. Given our lack of knowledge of machine learning and computer vision, we spent a significant amount of time researching during the initial iterations. Gaining a better understanding of those technologies helped us establish our core idea of how our model should be implemented and function. However, as shown in the case study, we were able to overcome these issues and work effectively as a team to deliver a high-standard application according to the specification.

%==============================================================================
\bibliographystyle{plain}
\bibliography{dissertation}

\end{document}